Search Results

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience

    Oracle has made a huge investment in the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end-user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes, and that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs.

    Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience and provide a roadmap for where the Oracle user experience is going. These workshops require non-disclosure agreements, and they have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.

    For Partners: George Papazzian, Principal, Naviscent, with Joyce Ohgi, Oracle

    Oracle Fusion Applications HCM Pre-Sales Seminar: In concert with Worldwide Alliances and Channels, under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop. Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience. The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK; January 29-30, 2014 @ Reston, Virginia; February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman); March 11-12, 2014 @ Dubai, United Arab Emirates; April 1-2, 2014 @ Chicago, Illinois.

    Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, and futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers, and it requires legal documentation. We will be talking about the Oracle applications UX strategy and roadmap.

    Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle’s Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle’s experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Dates: Nov 5-6, 2013 @ Redwood Shores, CA, USA; January 28-29, 2014 @ Reston, Virginia, USA; February 25-26, 2014 @ Guadalajara, Mexico; March 9-10, 2014 @ Dubai, United Arab Emirates. To register, contact [email protected].

    Simplified UI Customization & Extensibility (pilot workshop): We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on what toolkit components our system implementers and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that need to leverage the simplified UI. Dec 11, 2013 @ Reading, UK. For information, contact [email protected].

    Private lab tour and demos: Interested in seeing what’s going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know; we can arrange a spin through our usability labs at headquarters.

    OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle

    Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you’ll see.

    Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences.

    Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work.

    Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences (see above).

    For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies

    How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.

    UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flows you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences (see above).

    For Oracle Sales: Mike Klein, Jeremy Ashley, Brent White, Oracle

    Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights. Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • Integrate Google Wave With Your Windows Workflow

    - by Matthew Guay
    Have you given Google Wave a try, only to find it difficult to keep up with? Here’s how you can integrate Google Wave with your desktop and workflow using some free and simple apps.

    Google Wave is an online web app, and unlike many Google services, it’s not easily integrated with standard desktop applications. Instead, you’ll have to keep it open in a browser tab, and since it is one of the most intensive HTML5 web apps available today, you may notice slowdowns in many popular browsers. Plus, it can be hard to stay on top of your Wave conversations and collaborations by just switching back and forth between the website and whatever else you’re working on. Here we’ll look at some tools that can help you integrate Google Wave with your workflow and make it feel more native in Windows.

    Use Google Wave Directly in Windows

    What’s one of the best ways to make a web app feel like a native application? By making it into a native application, of course! Waver is a free Adobe Air-powered app that can make the mobile version of Google Wave feel at home on your Windows, Mac, or Linux desktop. We found it to be a quick and easy way to keep on top of our waves and collaborate with our friends. To get started with Waver, open its homepage on the Adobe Air Marketplace (link below) and click Download From Publisher. Waver is powered by Adobe Air, so if you don’t have Adobe Air installed, you’ll need to download and install it first. After clicking the link above, Adobe Air will open a prompt asking what you wish to do with the file. Click Open, and then install as normal. Once the installation is finished, enter your Google Account info in the window. After a few moments, you’ll see your Wave account in miniature, running directly in Waver. Click a Wave to view it, or click New wave to start a new Wave message. Unfortunately, in our tests the search box didn’t seem to work, but everything else worked fine. Google Wave works great in Waver, though not all of the Wave features are available since it is running the mobile version of Wave. You can still view content from plugins, including YouTube videos, directly in Waver.

    Get Wave Notifications From Your Windows Taskbar

    Most popular email and Twitter clients give you notifications from your system tray when new messages come in, and with Google Wave Notifier you can now get the same alerts when you receive a new Wave message. Head over to the Google Wave Notifier site (link below), and click the download link to get started. Make sure to download the latest binary zip, as this one will contain the Windows program rather than the source code. Unzip the folder, and then run GoogleWaveNotifier.exe. On first run, you can enter your Google Account information. Notice that this is not a standard account login window; you’ll need to enter your email address in the Username field, and then your password below it. You can also change other settings from this dialog, including update frequency and whether or not to run at startup. Click the value, and then select the setting you want from the dropdown menu. Now you’ll have a new Wave icon in your system tray. When it detects new Waves or unread updates, it will display a popup notification with details about the unread Waves. Additionally, the icon will change to show the number of unread Waves. Click the popup to open Wave in your browser. Or, if you have Waver installed, simply open the Waver window to view your latest Waves.
    If you ever need to change settings again in the future, right-click the icon and select Settings, and then edit as above.

    Get Wave Notifications in Your Email

    Most of us have Outlook or Gmail open all day, and seldom leave the house without a smartphone with push email. And thanks to a new Wave feature, you can still keep up with your Waves without having to change your workflow. To activate email notifications from Google Wave, log in to your Wave account, click the arrow beside your Inbox, and select Notifications. Select how quickly you want to receive notifications, and choose which email address you wish to receive the notifications at. Click Save when you’re finished. Now you’ll receive an email with information about new and updated Waves in your account. If there were only small changes, you may get enough info directly in the email; otherwise, you can click the link and open that Wave in your browser.

    Conclusion

    Google Wave has great potential as a collaboration and communications platform, but by default it can be hard to keep up with what’s going on in your Waves. These apps for Windows help you integrate Wave with your workflow, and can keep you from constantly logging in and checking for new Waves. And since Google Wave registration is now open for everyone, it’s a great time to give it a try and see how it works for yourself.

    Links:
    * Sign up for Google Wave (Google Account required)
    * Download Waver from the Adobe Air Marketplace
    * Download Google Wave Notifier
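
    If you’d rather script your own alert on top of those notification emails, the sketch below polls an inbox over IMAP with Python’s standard imaplib module and counts unread Wave notices. The host, credentials, and the notification sender address are all placeholder assumptions to adapt to your own account.

        import imaplib
        import time

        IMAP_HOST = "imap.gmail.com"
        USERNAME = "[email protected]"          # placeholder account
        PASSWORD = "app-password-here"           # placeholder credential
        WAVE_SENDER = "[email protected]"  # assumed notification sender

        def unread_wave_count():
            # Count unread messages from the Wave notification sender.
            conn = imaplib.IMAP4_SSL(IMAP_HOST)
            try:
                conn.login(USERNAME, PASSWORD)
                conn.select("INBOX", readonly=True)
                status, data = conn.search(None, "UNSEEN", 'FROM "%s"' % WAVE_SENDER)
                return len(data[0].split()) if status == "OK" else 0
            finally:
                conn.logout()

        if __name__ == "__main__":
            while True:
                count = unread_wave_count()
                if count:
                    print("%d unread Wave notification(s)" % count)
                time.sleep(300)  # poll every five minutes, like the Notifier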

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (Courier IMAP, Exim, SquirrelMail, etc.) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront.

    We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth-authenticated Google Apps for Education.) With SquirrelMail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary SquirrelMail login details, with JavaScript onLoad code to immediately submit the form.

    Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell, there's an add-on for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money, so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers.

    So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept Apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user (log in as them and pass back cookies to the user, or give the user a pre-filled form that autosubmits like we do with SquirrelMail)? This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible.

    I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering, slightly more integrated with the CAS server. It also didn't look like many people had had much success with it, and it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me too, and involves setting up a complex proxy config, which seems more likely to break. And, again, it doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA federated whatchamacallit) where it is possible to configure it to do user authentication and not just free/busy access authorization?
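
    For anyone weighing that last-resort "pre-login" route, here is a minimal sketch (in Python rather than the portal's Perl) of the autosubmitting-form trick applied to OWA's forms-based authentication. The /owa/auth/owaauth.dll action and field names below are what the stock Exchange FBA login form appears to post, but verify them against your own deployment; the host is a placeholder, and this inherits the same plaintext-password caveat as the SquirrelMail hack.

        from html import escape

        OWA_BASE = "https://mail.example.com"  # placeholder host

        # HTML page that posts the user's credentials to OWA's FBA
        # endpoint and submits itself as soon as it loads.
        FORM_TEMPLATE = """<!DOCTYPE html>
        <html><body onload="document.forms[0].submit()">
        <form method="POST" action="{base}/owa/auth/owaauth.dll">
          <input type="hidden" name="destination" value="{base}/owa/">
          <input type="hidden" name="flags" value="0">
          <input type="hidden" name="username" value="{user}">
          <input type="hidden" name="password" value="{pw}">
        </form></body></html>"""

        def prelogin_page(username, password):
            # Called only after the portal has already verified the user via CAS.
            return FORM_TEMPLATE.format(base=OWA_BASE,
                                        user=escape(username, quote=True),
                                        pw=escape(password, quote=True))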

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now. Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college; she was new and sexy and amazingly fast -- and I'd never met anything like her before. Her style and her power captivated me and I couldn't wait to learn more about her. I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on.

    Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance. She promised me she'd be gentle and not hurt me the way C did. She promised me she'd clean up after herself better than C did. She promised me she'd be less enigmatic and easier to keep happy than C was. But I was deceived. Oh sure, as far as truth goes, it wasn't a complete lie. To some extent she was more fun, more powerful, safer, and easier to maintain. But it just wasn't good enough -- or at least it's not good enough now.

    I loved C++; some part of me still does. It's my first love among programming languages, and I recognize its raw power, its blazing speed, and its improvements over its predecessor. But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund.

    If I ever need to write an operating system or a device driver, then I might need that speed. But 99% of the time I don't. I'm a business-type programmer, and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for that. That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical. There are many times we write programs to handle millions of real-time updates, or handle thousands of financial transactions, or track trading algorithms where every second counts. But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain. Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations. If I choose to write an O(n²) algorithm when I could have used an O(n) algorithm -- that can kill me. If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me. And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes, that can also kill me. But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and will more likely than not open up the system to more risk of crashes and resource leaks.

    And that's when my love for C++ began to die: when I noticed that I didn't need that speed anymore. That that speed was really kind of a lie. Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values. Sure, I can use an unsigned char instead of an int. But in the grand scheme of things it doesn't matter as much as you think it does. The key is maintainability, and that's where C++ failed me.
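
    (To make the macro-versus-micro point concrete, here is a quick sketch -- in Python rather than the C++/C# of this post -- of the same duplicate check written both ways; the algorithmic change dwarfs any bit-level tweaking.)

        import time

        def has_duplicates_quadratic(items):
            # O(n^2): compare every pair.
            for i in range(len(items)):
                for j in range(i + 1, len(items)):
                    if items[i] == items[j]:
                        return True
            return False

        def has_duplicates_linear(items):
            # O(n): one pass with a set.
            seen = set()
            for item in items:
                if item in seen:
                    return True
                seen.add(item)
            return False

        data = list(range(20000))  # worst case: no duplicates at all
        for fn in (has_duplicates_quadratic, has_duplicates_linear):
            start = time.time()
            fn(data)
            print("%s: %.3fs" % (fn.__name__, time.time() - start))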
    I like to tell the other developers I work with that there are two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct.

    Some people laugh at me when I say I now prefer maintainability over speed. But that is exactly the point. If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry, so they end up completely scrapping code before its time. When good code is written well for maintainability, though, it can be correct both now and in the future.

    And you know the best part? My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right. Her creators have poured hundreds of thousands of hours of time into making her the sexy beast she is today. They made her easy to understand and not an enigmatic mess. They made her consistent and not moody and amorphous. And they made her perform as fast as I care to go by optimizing her both at compile time and at run time.

    Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back. She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need. C# doesn't ask me to clean up her messes! She cleans up after herself, and she tries to make my life easier by taking on most of those optimization tasks C++ asked me to take upon myself.

    Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her. That I alone caused the pain she inflicted on me. And to some extent, you have a point. But she was so high-maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong turn or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake. But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe.

    So that's why I'm leaving C++ behind. She says she's changing for me, but I have no interest in what C++0x may bring. Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see. But she's out of my house now. I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?

    Read the article

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service to Verizon from AT&T, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down, basically just saying to connect it to your router and put it near a window (so it can get a GPS signal; it has to make sure it's within the correct area before operating).

    The problem I'm having is that the network light on it stays red. The troubleshooting information that came with it tells me this means there is a bad network connection. It's connected through an ASUS router running DD-WRT. No other devices on my network have a problem with it, including a Western Digital WDLIVE device, mine and my wife's cell phones (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me that it's definitely a configuration issue with my router. Verizon basically washed their hands of me when I connected it to my cable modem, and told me that it's a router issue and to try a different router. Because normal people just have extra routers laying around their houses...

    When I connect it to the router, I can watch the DHCP clients list on the status page, and the MAC of the network extender quickly fills up the clients list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. This made it quit doing what it was doing before, but it's still not working. I found the ports I needed to open on Verizon's website, and opened them in the port forwarding config on my router. This still didn't help. So I tried setting the network extender device's IP as the DMZ IP on the router. This still did no good.

    I called Verizon back and got the tech to write up a report, which he passed on to a "senior network tech" who I got a call back from a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the firmware to stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device is working just fine with it. But it's also not the network extender, since it works just fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart and get a Linksys WRT-54G to try. Is there anything else I could try before going that drastic?

    Cliffs:
    - Network extender won't work behind the router; works when connected directly to the cable modem.
    - Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP.
    - Won't work when the correct ports are forwarded to it.
    - Won't work with a DMZ address.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again.

    The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas.

    What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.

    The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines (we switched to Helvetica font to look more ‘Bauhaus’). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite?

    Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God.

    An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?

    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.
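
    (A gentler cousin of that idea is easy enough to sketch: a Git pre-commit hook that merely refuses, rather than destroys, code failing a quality bar. A minimal sketch in Python, assuming flake8 is installed as the stand-in quality metric; substitute whatever tool you trust.)

        #!/usr/bin/env python
        # Save as .git/hooks/pre-commit and make it executable.
        import subprocess
        import sys

        def staged_python_files():
            # Only look at files actually staged for this commit.
            out = subprocess.run(
                ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
                capture_output=True, text=True, check=True).stdout
            return [f for f in out.splitlines() if f.endswith(".py")]

        def main():
            files = staged_python_files()
            if not files:
                return 0
            result = subprocess.run(["flake8"] + files)
            if result.returncode != 0:
                print("commit rejected: quality gate failed", file=sys.stderr)
            return result.returncode

        if __name__ == "__main__":
            sys.exit(main())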

    Read the article

  • Performance of Cluster Shared Volume file copy from SAN

    - by Sequenzia
    I am hoping someone can help me out with a strange issue. We are running a Microsoft failover cluster with Server 2008 R2 and an EqualLogic PS4000 SAN. Our main configuration has 2 Dell PowerEdge T710 servers in the cluster. We have the CSV and quorum set up. The servers each have 10 Broadcom 1Gb NICs. Right now 4 of the NICs are on the iSCSI network for accessing the SAN. They use MPIO and the Dell HIT pack. We have 5 VMs running on each node and everything runs smooth; no noticeable performance issues or anything. From the SAN I can see the 4 iSCSI connections from each server to each volume (CSV and quorum). Again, it seems to perform great.

    The problem I am running into is with backups. I have tried a few backup programs like BackupChain and Veeam. The problem is both of them are very, very slow to back up the VMs. For instance, I have a 500GB (fixed disc) VHD that’s running on the cluster. It takes over 18 hours to back up that VHD, and that’s with compression and deduping turned off, which is supposed to be the fastest. We also have a separate server that is just for backups. It has a lot of direct-attached storage. As part of the troubleshooting I decided to bring that server into the cluster as a node. It now has access to the CSV and can read from C:\clusterstorage\volume1, which is where our VHDs live. This backup server only has 2 NICs: 1 NIC is going to the iSCSI network and the other is just on the main network. It has Intel NICs in it without any sort of MPIO or teaming.

    So with the 3rd server now in the cluster I started doing some benchmarking. I have a test VHD that’s about 7GB that’s stored in the CSV. I have tested file copying that VHD from all 3 servers to direct-attached storage in the respective server. The 2 Dell servers that are the main nodes in the cluster (they house the VMs) are reading that file at about 20 MB/sec, which is way too slow for the backups. The other server, which only has 1 NIC to the SAN, is reading at around 100 MB/sec.

    I spent a few hours on the phone with Dell today about this. We went through all kinds of tests and he was pretty dumbfounded. He really has no idea why that server with only 1 NIC is reading about 5 times as fast as the servers with 4 NICs and MPIO. We looked at the network utilization of the NICs while the file copy was going on. The servers with the 4 NICs had a small increase of activity during the file copy, but they only went up to around 8-10% on all 4 NICs. The other server with the 1 NIC jumped up to over 80% during the file copy. I plan on doing some more testing after hours and calling Dell back tomorrow, but I really am confused (and so is Dell’s support rep) as to why I cannot get faster file copy access to the CSV on those servers. Anyone have any input on this? Any feedback would be greatly appreciated. Thanks in advance.
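
    (If it helps anyone reproduce the comparison, below is a rough sequential-read throughput sketch in Python for timing the CSV read path from each node. The file path, block size, and read cap are placeholders, and you would want a test file larger than RAM so the OS cache does not flatter the numbers.)

        import time

        PATH = r"C:\ClusterStorage\Volume1\test.vhd"  # placeholder path
        BLOCK = 1024 * 1024                           # read in 1 MB chunks
        LIMIT = 2 * 1024 ** 3                         # stop after 2 GB

        def throughput_mb_per_sec(path):
            # Time large sequential reads and report the average rate.
            read = 0
            start = time.time()
            with open(path, "rb") as f:
                while read < LIMIT:
                    chunk = f.read(BLOCK)
                    if not chunk:
                        break
                    read += len(chunk)
            elapsed = time.time() - start
            return (read / (1024.0 * 1024.0)) / elapsed

        if __name__ == "__main__":
            print("%.1f MB/sec" % throughput_mb_per_sec(PATH))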

    Read the article

  • Goodbye my beloved Nexus One, hello Windows Phone 7

    - by George Clingerman
    Last night my wife’s Nexus One finally bit the dust. You may not know, but I’ve been nursing her Nexus One along for quite a while after her screen shattered. I was able to replace it on my own (go me!) but little quirks have been popping up and the phone was quickly deteriorating. Lately it’s been the power button. Wifey would often have to press the power button several times to get her phone to turn on, and last night it just wouldn’t wake up again. I took it apart and tried my best to see if I could somehow make it live once again, but no luck this time. It was finally ready to retire.

    We looked at first for a replacement phone for her, but she wasn’t really seeing anything she liked. So I decided to make the ultimate sacrifice and offer up my much-loved Nexus One, and I would then get a new Windows Phone 7 device. I love T-Mobile for my service, so my choices were immediately limited to basically just a single phone: the HTC HD7. I read reviews and they were all over the board, from people loving to people hating the phone, but I decided, hey, why not, let’s take the plunge. And I did. I’ve only had the phone for about two days now, so below is my list of first-reaction pros/cons. These are basically things I’ve missed or things I’ve noticed that I really like about my new Windows Phone.

    Cons:
    * No Google Talk – I used this a LOT on my Nexus. I’ve found an application called “Flory”, but it’s just an OK substitute, not the same as the full-featured GTalk I had on my Nexus.
    * Seesmic is limited – I loved the way Seesmic worked on my Nexus. It was my mobile Twitter client of choice. Everything about it worked really well. On Windows Phone 7 it’s just OK. I don’t get notification of new tweets, and it’s several clicks to even see a new tweet. It definitely needs some more development before it has the same features as it did on my Nexus.
    * Buttons don’t give great feedback – I’d read this in the reviews of the HTC HD7 and I’m finding it true myself. Pressing the buttons on the side of the phone and the power button on the top is finicky, and I have to be looking at my phone to make sure I actually got them to press.
    * Web browsing is slow – I’m not sure what’s up with this. I’m connected to my wireless network at my house, but it’s noticeably slower on my WP7 device than my Nexus. I even switched back to verify, and it’s definitely true. Retrieving tweets, hitting up the XNA forums, and just general web activities are all much slower on my WP7. I can’t think of any reason this would be true, but it almost seems like it’s not using my wireless for everything.

    Pros:
    * It’s pretty – The phone is really gorgeous. I loved the form of my Nexus One, but the HTC HD7 is just as pretty, maybe even prettier! It’s got a nice large, bright screen. It feels good in my hand. And it even has a little kickstand to set the phone up for movie watching. Definitely a gorgeous phone.
    * LIVE integration – I lost a lot of nice integration with Google services, but I gained a lot of integration with LIVE services that I also use. Now I can see when I get new GMail messages AND Hotmail messages. And having the Xbox LIVE integration is admittedly cool as well.
    * Tile notifications rock – The Windows Phone 7 commercials are TRYING to get this message out, but they’re doing a really poor job of it. Tile notifications really do save you from your phone. I have a whole little mini informational dashboard at a glance. I unlock my phone and at a glance I can see new IMs, new mail messages, software updates, etc., all just letting me know in the tiles I have arranged. That’s pretty cool.
    * The interface works really well – I feel super hip and cool swiping and sliding things around on my Windows Phone 7. Everything works that way, and it’s great and fast and really good looking. I’m all about me feeling cool.
    * I’m gaming more – I had gotten a few games on my Nexus One, but there really weren’t a lot of good developers flocking to the service. Just browsing through the Windows Phone 7 marketplace, I’m already seeing a ton of games I want to try and buy. And I sat down and beat Pixel Man 0 just yesterday on my phone. I’m already gaming more than I did on my Nexus One.
    * Netflix integration is fantastic – It works just like it does on my Xbox 360, and I love having this feature on my phone.
    * It’s basically a Zune – I’ve been taking my Zune to work and listening to music off of that while I code. I no longer need to take it with me; now I just sync songs onto my phone and it’s my new Zune. I freaking love that. One less device to carry around.

    All in all, my cons have really little to do with the phone (just the buttons and the web browsing) and more to do with the applications needing to catch up a bit to what I’m used to. And the pros are things that ARE phone-specific, so I’m seeing that as a good sign that I’m going to be very happy with my Windows Phone 7. So Wifey is happy having her Nexus One again, and I’m happy with my new Windows Phone 7. Life is good. Now I just need to make a game to pay for it….

    Read the article

  • Server Cabinet/Room Cooling

    - by user37226
    Hello all. I currently have two desktops and three servers in my office sitting on the floor (I know this is bad). With that many servers the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily jumps the room +10 degrees. I have found a free 42U server cabinet that a hosting company was throwing away and have decided to house all of these systems in it. One server is in a rack-mount case while the other four servers are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor. When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door, and a rear metal door with vent holes on the bottom.

    First, the cooling issues. I know I am going to want to have cool air enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually re-enters the cabinet. I had an idea to fix this problem, but am unsure if it will work. I was thinking of taking flexible piping and adapting it to the back fans of the computers, having the other end of the pipe at the top close to the cabinet's top-mounted fan. I was then thinking of creating a duct from the top fan into the attic. Now, I am very concerned that the attic will cause issues with this type of setup, because during the July/August time frame the attic is easily 120 degrees F. I could also use the flexible pipe to take it to an attic exhaust vent, if it would be better to vent it into the 100-degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet this increases the noise. Plus they are around $250 on Amazon. What would you all recommend?

    Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I possibly mount a standard home air filter on the other side of that hole? Thanks in advance for your recommendations. I look forward to reading your interesting ideas.
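
    (One way to sanity-check any of these fan options is the standard sensible-heat approximation: BTU/hr = watts x 3.412, and CFM is roughly BTU/hr divided by (1.08 x the allowable temperature rise in degrees F). A quick sketch in Python, with the per-machine wattage as an assumed placeholder:)

        # Back-of-envelope airflow check for the cabinet.
        TOTAL_WATTS = 5 * 250   # five machines at ~250 W each (assumed)
        DELTA_T_F = 20          # allowable rise over intake air, deg F

        btu_per_hr = TOTAL_WATTS * 3.412
        cfm_needed = btu_per_hr / (1.08 * DELTA_T_F)

        print("Heat load: %.0f BTU/hr" % btu_per_hr)
        print("Airflow needed: %.0f CFM at %d degF rise" % (cfm_needed, DELTA_T_F))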

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types. Again I am illustrating the ease of integration.
    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our flow.
    You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). You name the Service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and accept the default Operation Name of Read. Press Next.
    The next step is to tell the adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record it encounters in the file. Press Next. Now I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.
    At this stage the adapter has no description of the input format. So I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example I will use Delimited, as I am going to load a CSV file. Press Next.
    The best way for the wizard to work is with a sample. I have a sample file, and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: If your data uses a character set other than US-ASCII, this is the point at which you specify the character set to use. Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next.
    You have to specify the file element and the record element. These will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file root element and UserRecord as my record root element. Press Next. As the file is CSV, the delimiter is ",", and I will also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next. Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record.
    This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.
    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the leftmost connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the previous post in this series, generate input variables for the BPEL process to hold the input records.
    You now need to add the product Web Service. The process is the same as described in the previous post in this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.
    Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node MapperFile, associate the source of the mapping with the schema from the Receive node, and set the target to the input variable of the Invoke node.
    We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete.
    The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file matching the pattern/name you specified into the directory configured in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how data loading can be achieved using SOA Suite in conjunction with our products.
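    To make the file format concrete, here is a minimal sketch of what the input and its translation might look like, assuming the LoadUsers/UserRecord element names from the walkthrough and hypothetical user and email columns (your column names will come from the header record of your own sample file):
       user,email
       FRED01,fred@example.com
       MARY02,mary@example.com
    Because we told the adapter to publish each record individually, each CSV row arrives in the BPEL process as its own XML fragment, roughly:
       <UserRecord>
          <user>FRED01</user>
          <email>fred@example.com</email>
       </UserRecord>
    The Transform step then maps these elements onto the AS-User request and sets transactionType to UPD so the service performs an update rather than the default Read.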

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now and, although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar-sized one (80 gig). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and nightmarish accounts has left me frozen, unable to make an informed decision as to how to start.
    I've made a few attempts. dd failed after about 2 hours as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot.
    As you can probably tell, I'm out of my depth here, and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.
    Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) on the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD and, to avoid a conflicting label, changed the /boot partition's label using e2label to /bootnew. I then changed the volume group name using lvm vgrename from VolGroup00 to VolGroup001. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) into /newhd, I rsync-ed over everything I could to the new drive, using -avr switches and trailing slashes, as mentioned here.
    I then disconnected my original source drive again, booted from the live CD again, changed the boot partition label back from /bootnew to /boot using e2label and then renamed the volume group back to VolGroup00. I then rebooted and it went through the familiar start-up routine, only to not find a host of files in proc, usr, lib, var etc. The boot did complete but there were lots of red 'FAILS'. I could log in with my existing creds, but the network was kaput, I couldn't start X (the desktop GUI) and there were also a few (a lot) of error messages pertaining to iptables. Back to square one. I naively thought I'd nailed it.
    Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups, and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
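    In case it helps anyone picking this apart, here's roughly the sequence I followed, condensed into commands; the device and logical volume names are from memory, so treat them as placeholders for whatever your rescue environment actually reports:
       # in rescue mode, with only the NEW drive attached
       e2label /dev/sdb1 /bootnew            # avoid clashing with the source /boot label
       lvm vgrename VolGroup00 VolGroup001   # avoid clashing with the source VG name
       # reboot with BOTH drives attached, then copy the data across
       mkdir /newhd
       mount /dev/VolGroup001/LogVol00 /newhd
       rsync -av /bin /etc /home /lib /opt /root /sbin /usr /var /newhd/
       # disconnect the source drive, boot the rescue CD again, and revert
       e2label /dev/sdb1 /boot
       lvm vgrename VolGroup001 VolGroup00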

    Read the article

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years, we, as a small web design company, have rented a dedicated server at a large hosting provider. They had several support levels. When we signed up for this, we had very limited in-house knowledge about server maintenance, and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect in this were these claims:
    [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed
    [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed
    These items were clearly stated on their website as being part of the advantage of this package. With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package that is next in line ($100 vs $50). Over the years, we have paid several thousand dollars for this service.
    Then came the moment that I learned more and more about server management. And I found out, step by step, that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and fixes that were lacking. Shocked, I wrote the provider. Turns out, they had decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation:
    [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client
    I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail were still on their website, after they apparently stopped doing this years ago! Upon finding this out, I requested they refund all that we had paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for that one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago.
    Now, I am not the kind of person who runs to a lawyer for any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations.
    So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right here to claim this money, or are they right that there is some sort of statute of limitations on such claims? Any feedback is appreciated.

    Read the article

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, shouldn't but I MUST solve this).
    1) Comcast Cable Internet
    2) Motorola SURFboard eXtreme Cable Modem
       a) Model: SB6120
       b) DOCSIS 3.0 and 2.0 support
       c) IPv4 and IPv6 support
    3-A) Cisco Small Business RV220W Wireless N Firewall
       a) Latest firmware
       b) Model: RV220W-A-K9-NA
       c) WAN port to modem (2)
       d) vlan 1: work
       e) vlan 2: everything else
    3-B) D-Link DIR-615 Draft 802.11N Wireless Router
       a) Latest firmware
       b) WAN port to modem (2)
    4) Servers connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) CAT5e patch cables
       c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (domain controller, DNS, former DHCP)
       d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMware Server)
    4) Linksys EZXS88W unmanaged workgroup 10/100 switch
       a) If firewall 3-A, then vlan 2
       b) 25' CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects Xbox 360, Blu-ray player, PC at TV
    5) Office equipment connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects:
          1) Dell Latitude laptop 10/100/1000
          2) Dell Inspiron laptop 10/100
          3) Dell Workstation 10/100/1000 (pristine host, VMware Workstation 7.x with many bridged VMs)
          4) Brother laser printer 10/100
          5) Epson All-In-One Workforce 310 10/100
    5-A) NetGear FS116 unmanaged 10/100 switch
       a) I've had this switch for a long time and never had issues.
    5-B) NetGear GS108 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-C) Linksys SE2500 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-D) TP-Link TL-SG10008D unmanaged 10/100/1000
       a) Bought new for this issue and still have.
    6) VLan 1 wireless connections (on same subnet if 3-B)
       a) Any of those at 5c
       b) HP laptop
    7) VLan 2 wireless connections (on same subnet if 3-B)
       a) iPad, iPod
       b) Compaq laptop
       c) Epson wireless printer
    Whew. Without hosting a diagram, I hope that paints a good picture.
    The Issue
    The breakdown here is at item 5. No matter what I do, I cannot have a switch at 5 and have to run everything wireless regardless of router.
    Issues related to using a switch (point 5 above):
    - SpeedTest is good.
    - Poor throughput to other devices, if they can communicate at all.
    - Usually cannot ping other devices, even on the same switch, although when able, ping times are good.
    - Eventual loss of connectivity that can "sometimes" be restored by unplugging everything for several days - not minutes or hours, we're talking a week, if at all.
    - Connecting directly to a computer gives a good internet connection; however, throughput to other devices connected to the firewall is at best horrible. Yet printing doesn't seem to be an issue as long as they are connected via wireless.
    - I have to force the RV220W to 1000Mb on the respective port if using a gigabit switch.
    Issues related to using wireless in place of a switch (point 5 above):
    - Poor throughput to other devices, if they can communicate.
    - SpeedTest is good.
    Bottom line:
    - Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them. They rewired EVERYTHING, which did solve internet drops.
    - Computer-to-computer connections are garbage.
    - Cannot get a switch at 5 to work, yet the other at 4 has never had an issue.
    - Direct connection, bypassing the switch, is good for DHCP and internet.
    - DNS must be on the server, not the firewall.
    Cisco insists it's my switches but, as you can see, I have used four switches and two different cables with the same result. My gut feeling is that something is happening with routing, but I'm not smart enough to know that answer. I run a lot of VMs at 5-c-3; could that cause it? What's different compared to my previous house is that I have introduced gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already.
    I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is that I will find a working solution and, again, after days it will stop working. So this means I cannot isolate whether it's a computer, since I have to use them. Oh, and a solution is not throwing more money at this. I'm well into $1k already. Yah, lame.

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a Nagios system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good.
    Sadly, sendmail (and postfix before it) seem to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF. I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:
    echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" reaper@teaparty.net
    Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:
    44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
    79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
    2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
    72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    r@teaparty.net..
    53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
    20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
    34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
    30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
    36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
    30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
    36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
    55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
    6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
    37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
    72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
    65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
    6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us
    About half way down, between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the hex dump on the left, .. in the plain text on the right). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, e.g., alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these CRLFs are the cause of the problem.
    So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful.
    Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in them now get relayed correctly into SMSes.
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
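    For reference, the behaviour on the wire is standard header folding: RFC 5322 limits line lengths (section 2.1.1: each line SHOULD be no more than 78 characters) and explicitly permits folding long header fields (section 2.2.3), inserting a CRLF before existing whitespace in the field body; receivers unfold by removing any CRLF immediately followed by whitespace. So a long subject legitimately travels as:
       Subject: 1234567 101234567 201234567 301234567 401234567 501234567<CRLF>
        601234567 701234567 801234567 90123456789<CRLF>
    and a compliant receiver reassembles it into the single logical line:
       Subject: 1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789
    In other words, the MTA folding the Subject: is lawful and a receiver that truncates at the fold instead of unfolding is the intolerant party, which is consistent with the fix kapow eventually deployed.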

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition.
    When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual.
    Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now.
    When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term."
    For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.
    Game Changer #1: New Standard for Innovation
    Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
    For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine.
    Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.
    Game Changer #2: New Standard for Work
    Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains.
    So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training.
    This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand. "Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain."
    Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access.
    A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps.
    Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers."
    Game Changer #3: New Standard for Technology Adoption
    As IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application.
    Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before."
    For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs.
    It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."
    For More Information
    User Input Key to the Success of Oracle Fusion Applications
    Transforming Coexistence into Strategic Value
    Under the Hood
    Oracle Fusion Applications
    Oracle Service-Oriented Architecture

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr http-queries and deserializing Solr's result-XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)
    What is faceted search?
    Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all.
    The SQL solution for faceted search
    Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet-query would be something like:
       SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
       FROM Houses
       WHERE city = 'Amsterdam'
    While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).
    Enter Solr
    "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr)
    Solr isn't a database; it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages.
    Getting faceted search results from Solr is very easy; first let me show you how to send an http-query to Solr:
       http://localhost:8983/solr/select?q=city:Amsterdam
    This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):
       <response>
          <result name="response" numFound="3" start="0">
             <doc>
                <long name="id">3203</long>
                <str name="city">Amsterdam</str>
                <str name="street">Keizersgracht</str>
                <int name="numberOfBathrooms">2</int>
             </doc>
             <doc>
                <long name="id">3205</long>
                <str name="city">Amsterdam</str>
                <str name="street">Vondelstraat</str>
                <int name="numberOfBathrooms">3</int>
             </doc>
             <doc>
                <long name="id">4293</long>
                <str name="city">Amsterdam</str>
                <str name="street">Wibautstraat</str>
                <int name="numberOfBathrooms">2</int>
             </doc>
          </result>
       </response>
    By adding a facet-querypart for the field "numberOfBathrooms", Solr will return the facets for this particular field. We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.
       http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms
    The complete XML response from Solr now looks like:
       <response>
          <result name="response" numFound="3" start="0">
             <doc>
                <long name="id">3203</long>
                <str name="city">Amsterdam</str>
                <str name="street">Keizersgracht</str>
                <int name="numberOfBathrooms">2</int>
             </doc>
             <doc>
                <long name="id">3205</long>
                <str name="city">Amsterdam</str>
                <str name="street">Vondelstraat</str>
                <int name="numberOfBathrooms">3</int>
             </doc>
             <doc>
                <long name="id">4293</long>
                <str name="city">Amsterdam</str>
                <str name="street">Wibautstraat</str>
                <int name="numberOfBathrooms">2</int>
             </doc>
          </result>
          <lst name="facet_fields">
             <lst name="numberOfBathrooms">
                <int name="2">2</int>
                <int name="3">1</int>
             </lst>
          </lst>
       </response>
    Trying Solr for yourself
    To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.
    Some best practices for running Solr on Windows:
    - Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
    - Use a .NET XmlReader to convert Solr's XML output-stream to .NET objects. Don't use XPath; it won't scale well.
    - Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).
    In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
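    As a quick illustration of that last filter-query tip, the city restriction from the earlier examples can be moved from q into fq, so Solr caches the filter independently of the main query; a sketch against the same hypothetical index:
       http://localhost:8983/solr/select?q=*:*&fq=city:Amsterdam&facet=true&facet.field=numberOfBathrooms
    The facet counts come back exactly as before, but repeated searches within Amsterdam can now be served from Solr's filter cache instead of re-scoring the city clause on every request.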

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I'd like to say that Dom Reed's talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate's head of creative design and who did the preliminary design work for Simple-Talk, has taken art photography to an extreme that was impossible before Photoshop. He's not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.
    Have a look at some of the Flickr pages:
    - Uncle Spike
    - The B-Men – Woolverine
    - The 2011 BoyZ iN Sink reunion tour turned out to be their last
    - Error 404 – Flibble not found
    Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of 'art' photography, it became famous through the internet. His photos have received well over a million views on Flickr.
    It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project, and I can quote from the report by the excellent Cabume website, the source of tech news from the 'Cambridge cluster':
    Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed's top-notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: in addition to his full-time role at Cambridge software house Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the .
    And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response, it is. At the time of writing, just a few days after it went live, 'I Drink Lead Paint: An absurd photography book by Mr Flibble' had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline, from 37 backers.
    Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don't think for a second that he doesn't have a firm grasp of the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section 'risks and challenges' on his Kickstarter page his statement begins:
    "An angry horde of telepathic iguanas discover the world's last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love."
    At which point Reed manages to wrestle away the keyboard, giving him the opportunity to present a slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble's Kickstarter adventure and felt that his responses were worth publishing in full.
    Firstly, how did you manage it – holding down a full-time job and also conceiving and executing these ideas on a daily basis?
    I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they'd give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas.
    I admit that I really struggled with time. I'm proud that I never missed a day, but it was definitely hard when you were late back from work, tired or doing something socially on the same day. I don't watch TV, which I guess really helps, because I'd frequently be spending 2-5 hours taking and processing the photos every day.
    Are there any overlaps between software development and creative thinking?
    Software is an inherently creative business, and the speed it moves at ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty, I suppose, and I guess I'm continually exposed to a lot of possible sources of inspiration.
    How specifically will this Kickstarter project allow you to test the commercial appeal of your work, and do you plan to get the book into shops?
    It's taken a while to be confident saying it, but I know that people like the work that I do. I've had well over a million views of my pictures, many humbling comments, and I know I've garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there's worth to my work beyond just making people smile. In an online world where there's an abundance of freely available content, can you hope to receive anything from what you do, or would people just move on to the next piece of content if you happen to ask for some support?
    A book has been the single most requested thing that people have asked me to produce, and it's something that I feel would showcase my work well. It's just hard to convince people in the publishing industry just now to take any kind of risk – they've been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me the confidence to invest in myself a bit more – hard to do when you're riddled with self-doubt!
    I'd love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they're Christmas shopping, recognizing that it'd be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I'm clearly aware of how I could use technology to distribute my work, but I can't deny that there's something very appealing about having a physical thing to hold in your hands.
    If the project is successful, is there a chance that it could become a full-time job?
    At the moment that seems like a distant dream as, should this be successful, there are many more steps I'd need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick-start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably.
    So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we've never met face to face. She's part artist, part techie and all cool. We "met" via my good buddy George Clingerman and have been plotting to take over the world, errr… I mean "collaborating", ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I'll let her tell you the rest of the story in these NINE Questions:
    1. Where are you from?
    I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water.
    2. What do you do?
    I fight evil by moonlight and win love by daylight.. or something like that. I've been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue. At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up.
    3. How did you get to where you are now?
    I'm still not where I want to be, but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really bad, but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I'd grow out of. I knew I would not 'grow out' of games, and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I've worked with consoles, websites and now PC games. I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard-working testers and an insight into creative problem solving.
    4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you?
    Game testers don't get a lot of respect because of their stigmas and the fact most people don't actually know what we do. People hear about the opening and closing of disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything obvious: "can I beat the level and save the princess?" to the more obscure: "What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key, then my cat pulls out my memory card and hides it in her litter box?"
    On the dev side, testers can be very scary people, especially when the test team is not in house and you can't see each other's faces. I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love.
    5. You are also an accomplished artist. Got any major projects right now you'd like to talk about?
    LOL, I don't know if I'd say I'm an accomplished artist just yet. I'm still a long way from where I want to be. I figure that's what makes you grow though: the desire to never stop improving. I like QA but I want to be a full-time artist. I was lucky enough to register for a table at Sakura Con in the 11-second window that the tables sold out in. As such, I'll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pastime but I don't draw a whole lot of it, so I'm making up for lost time.
    As I seem to enjoy burying myself in work, I'm an art lead for a secret project that's so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for indies when I was writing a weekly newsletter on XBLA news. I'm a little late to the party, but I find myself in a unique position where I am an artist and also have technical skills in games. While not a programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen.
    Lastly, I have an ongoing web comic (Shell's Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future.
    6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything.
    All my projects are currently 2D. I have personal projects, such as the ongoing comic, as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con. I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them.
    7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done?
    I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time, because of their animation abilities. I've worked on frame-by-frame animation forever but I would love to learn 2D rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist, it will be an ideal weapon for me.
    8. You did some videos too, depicting the art creation process. What was the motivation behind those?
    The creative process is just as important as the final product, if not more so. I've always loved watching speed-paint videos and wanted to try it out myself. Turns out it's a lot of work and time, but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself.
    9. Got any interesting tattoos? Designed any for yourself or other people?
    Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games, but I couldn't find something I felt would stand the test of time, even with all the classic sprite games. I'm very picky, so we'll see if I can get something solid decided.
    Come see me at Sakura Con April 2-4!!!

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call our the AS-User Web Service we created in Part 1. This is just a basic test and illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process as simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I have no have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke), you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser as a name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and it's dependent artifacts into the project if you intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: For the perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
    To complete the setup of the Web Service you need to set the credentials for it to use; refer to the previous post on how to do that.

    Now to use the Web Service. As it is just imported, and not yet connected to the BPEL process, you must add an Invoke action to your BPEL process to call it. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This creates an empty Invoke action. You will notice some connectors on the Invoke node. Grab the one closest to your Web Service and drag it to connect the Invoke to the Web Service. This instructs BPEL to use the Invoke to call the Web Service.

    Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name nodes straight away, and appropriately, so you can trace the logic later; I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name; it uses the node name as a prefix (which is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You now have a very basic BPEL process containing an input, an invoke and an output node. It is not complete yet, though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the caller. First, add an Assign node to assign the input to the Web Service. Select the Assign activity from the BPEL Constructs zone in the Component Palette and drag and drop it between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double-clicking the node allows you to specify its name; I use AssignUser to describe that I am assigning user data.

    On the Copy Rules tab you specify the mapping between the input variable (inputVariable/payload/process/input string) and the input variable for the Web Service call. We are passing data from the input of the BPEL process to the relevant input variable of the Web Service. This is simply drag and drop between the two data structures. In the example, I map the input to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to pass the output from the Web Service call back to the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, built by concatenating the firstName and lastName elements from the Web Service call. For this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.
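    For reference, a copy rule like the one above comes out in the BPEL 1.1 source as a copy element. A minimal sketch, assuming the default variable names and illustrative namespace prefixes and element names (client, ns2 and ASUser here are assumptions, not taken from the generated artifacts):

        <assign name="AssignUser">
          <copy>
            <!-- from the process input... -->
            <from variable="inputVariable" part="payload"
                  query="/client:process/client:input"/>
            <!-- ...to the user element of the Web Service input (the object's primary key) -->
            <to variable="InvokeUser_InputVariable" part="parameters"
                query="/ns2:ASUser/ns2:user"/>
          </copy>
        </assign>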
    In this case we want to transform the output from the Web Service call, so we want it between the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double-clicking the Transform node allows you to name it; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and its target. In the example, the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map, I can create the map itself.

    The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the mapping line. I then attach the lastName to the function to indicate the concatenation of the fields. By default the names would be concatenated with no space, so to make the name legible I add a space between the fields by clicking the function and adding a space to the call. I now have a completed mapping, and I can save the whole project, as my BPEL process is now complete. As you can see, the following happens:
    We accept input from the client (the userid for the call) in the receiveInput step.
    We assign that value to the input parameters for the Web Service call in the AssignUser step.
    We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite, using the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (otherwise you will get a 401 error). Navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen; in my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see, this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.
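    Under the covers the Transform activity produces an XSLT map. A minimal sketch of the concatenation it generates, assuming illustrative namespaces and element names (the example.com URIs and the ASUser/processResponse names are assumptions for the sketch):

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:ns0="http://example.com/ASUser"
                        xmlns:client="http://example.com/simpleBPEL">
          <xsl:template match="/">
            <client:processResponse>
              <client:result>
                <!-- concatenate firstName, a space, and lastName into the result -->
                <xsl:value-of select="concat(/ns0:ASUser/ns0:firstName, ' ',
                                             /ns0:ASUser/ns0:lastName)"/>
              </client:result>
            </client:processResponse>
          </xsl:template>
        </xsl:stylesheet>

    The space added via the mapper dialog is simply the ' ' literal in the middle of the concat call.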

    Read the article

  • CodePlex Daily Summary for Monday, December 27, 2010

    CodePlex Daily Summary for Monday, December 27, 2010. Popular releases:
    - Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: The architecture has been reviewed and adjusted so that Web and WPF versions of the framework can be introduced next. Rocket.Core is introduced; controller button functions were revisited and updated; the DB is renewed to suit the implemented features; the Create New button functionality has changed; question-handling features were added.
    - VCC: Latest build, v2.1.31226.0: Automatic drop of the latest build.
    - FxCop Integrator for Visual Studio 2010: FxCop Integrator 1.1.0: New feature: search violation information; you can find specific violation information with a simple search expression. New feature: analyze with an FxCop project file, so you can customize code analysis behavior (e.g. analyze specified types only, use specific rules only, and so on). Improvement: the code analysis result list shows more information (added Project and File columns). Change...
    - Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky, so I discovered them all while at my parents' house for Christmas).
    - Mcopy API: McopyAPI v.1.0.0.2: Changed the display help ("mcopyapi /?") and a few minor changes.
    - OPSM: OPSM v2.0: Updated version, 30%-40% faster than v1.0. Requires .NET Framework 2.0.
    - People's Note: People's Note 0.19: Added touch scrolling to the note view screen. To install, copy the appropriate CAB file onto your WM device and run it.
    - HSS Core Framework: HSS Core v4.0.700.10: Released December 25th, 2010. Upgrade instructions from v4.0.700 to v4.0.700.10 (patch release): uninstall v4.0.700, then install the new v4.0.700.10. Upgrade instructions (full upgrade from a version prior to v4.0.700): uninstall your old version; update the log configuration by changing the logging from Database to Machine; backup and then truncate the HSS logging tables; uninstall the HSSLOG database using the HSSLOG database install wizard; uninstall the HSS Core Framework; ins...
    - NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in the expense category report.
    - NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo).
    - NewLife XCode: XCode v6.5.2010.1223: Release notes cover NewLife.Core, NewLife.Net, XControl, XTemplate, XAgent, NewLife.CommonEntity, compatibility with XCode v3.5, and the XCoder/XCoder_Src template tools.
    - MiniTwitter: 1.64: Changes since 1.63, including URL handling.
    - VivoSocial: VivoSocial 7.4.0: Please see the changes at http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48
    - Umbraco CMS: Umbraco 4.6 Beta (codename JUNO): Contains many new features focusing on an improved installation experience, a number of robust developer features, and more than 89 bug fixes since the 4.5.2 release: an improved installer experience; updated starter kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; a skinning engine and skin customization (see the Skinning Documentation Kit); default dashboards on install with a hide option; updated login t...
    - SSH.NET Library: 2010.12.23: This release includes some bug fixes and a few new features. Fixes: allows retrieving big directory structures; the ssh-dss algorithm is fixed; populates sftp file attributes. New features: support for a passphrase when a private key is used; support added for the diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms; allows providing multiple key files for authentication; adds support for the "keyboard-interactive" authentication method...
    - ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet? MvcSiteMapProvider is also listed in the NuGet feed. Release notes: this will be the last release targeting ASP.NET MVC 2 and .NET 3.5; MvcSiteMapProvider 3.0.0 will target ASP.NET MVC 3 and .NET 4. The Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan. ISiteMapNodeUrlResolver is now completely responsible for generating th...
    - Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, e.g. desktop, documents, etc. A manual is included to get you started.
    - Multicore Task Framework: MTF 1.0.1: Release 1.0.1 of the Multicore Task Framework.
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1. Added script save/load in the user query window. 2. Fixed a problem with the connection dialog asking for a user name even when Windows auth is chosen. 3. Auto-opens the user table when double-clicking a table node. 4. Improved the alert message; added a log-only method.
    - EnhSim: EnhSim 2.2.6 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Fixing up some r...

    New projects:
    - ActiveDirectory Object Model Library: The ActiveDirectory Object Model Library, which can operate on AD easily.
    - Allspark: Allspark is a BI-focused sharing space.
    - Calendar Control for WP7: A calendar control for WP7 (Windows Phone 7). I needed a calendar control for my WP7 app, and since WP7 did not have any and I couldn't find anything on the internet, I decided to write my own. Please feel free to use it in your project and share if you have improved on it.
    - CryptoPad: CryptoPad is a simple notepad-like application that works with encrypted files. Users can open, edit and save encrypted files using CryptoPad without any other application; everything is easy and transparent.
    - Entity Visualizer: An Entity Framework debugger visualizer.
    - HTML-IDEx: HTML-IDEx is meant to be an open-source, lightweight WYSIWYG HTML editor in which the user can see what they are doing in real time. Comparable to my other project, Brandons HTML IDE; I'm hoping this to be a huge success.
    - IL Inject: Aspect-oriented programming based on the injection of IL instructions into the methods of an assembly that is marked by attributes.
    - Integração no Trabalho: This project will be developed to enable integration among a company's employees. The program will run over the network; through it, projects and activities can be registered, and colleagues will be able to comment, critique and give tips on other projects.
    - Knexsys Project: Knexsys, also known as KKD (Knexsys Knowledge Discovery), is a research program that aims to study the capabilities of SQL Server and the .NET framework to implement a rule production engine to mine real-time data. It's developed in C#.
    - mysvn: This project hosts some of my own projects. I am interested in WPF and RIA. Furthermore, I want to learn more about C, C++, Java, PHP and Python.
    - OpenTwitter: A Twitter clone made with C# and LINQ. This is only a web service; there is no client for it. The main idea behind OpenTwitter is to provide a framework that you can use with your own clients (mobile devices, web pages, etc.) and data providers (XML, SQL Server, Oracle, etc.).
    - OPSM: OPSM miner & information project.
    - Outlook UI Tools: Changes to the Outlook UI to make it more usable: allow reply to address a recipient instead of the sender; keep meeting reminders from popping up when overdue by many days.
    - SimpleUploadTo: Simple UploadTo.
    - TFS 4 FPSE: Allows FrontPage Server Extensions to use TFS as a source control repository.
    - TFS CheckInNotifier: A Team Foundation Server 2010 check-in event notifier desktop application.
    - Universe.WCF.Behaviors: Universe.WCF.Behaviors provides behaviors: easy migration from Remoting, transparent delivery, traffic statistics, a WCF streaming adaptor for BinaryWriter and TextWriter, etc.
    - vebshop2: vebshop2.
    - Windows Weibo all in one for Sina Sohu and QQ: Windows Weibo all in one for Sina, Sohu and QQ.

    Read the article

  • asp.net image aspect ratio help

    - by StealthRT
    Hey all, I am in need of some help with keeping an image's aspect ratio in check. This is the aspx code that I have to resize and upload an image the user selects:

    <%@ Page Trace="False" Language="vb" aspcompat="false" debug="true" validateRequest="false"%>
    <%@ Import Namespace=System.Drawing %>
    <%@ Import Namespace=System.Drawing.Imaging %>
    <%@ Import Namespace=System %>
    <%@ Import Namespace=System.Web %>
    <SCRIPT LANGUAGE="VBScript" runat="server">
    const Lx = 500                     ' max width for thumbnails
    const Ly = 60                      ' max height for thumbnails
    const upload_dir = "/uptest/"      ' directory to upload file
    const upload_original = "sample"   ' filename to save original as (suffix added by script)
    const upload_thumb = "thumb"       ' filename to save thumbnail as (suffix added by script)
    const upload_max_size = 512        ' max size of the upload (KB); note: this doesn't override any server upload limits
    dim fileExt                        ' used to store the file extension (saves finding it multiple times)
    dim newWidth, newHeight as integer ' new width/height for the thumbnail
    dim l2                             ' temp variable used when calculating new size
    dim fileFld as HTTPPostedFile      ' used to grab the file upload from the form
    Dim originalimg As System.Drawing.Image ' used to hold the original image
    dim msg                            ' display results
    dim upload_ok as boolean           ' did the upload work?
    </script>
    <%
    randomize() ' used to help the cache-busting on the preview images
    upload_ok = false
    if lcase(Request.ServerVariables("REQUEST_METHOD"))="post" then
        fileFld = request.files(0) ' get the first file uploaded from the form (note: you can use this to iterate through more than one image)
        if fileFld.ContentLength > upload_max_size * 1024 then
            msg = "Sorry, the image must be less than " & upload_max_size & "Kb"
        else
            try
                originalImg = System.Drawing.Image.FromStream(fileFld.InputStream)
                ' work out the width/height for the thumbnail; preserve aspect ratio and honour max width/height
                ' Note: if the original is smaller than the thumbnail size it will be scaled up
                If (originalImg.Width / Lx) > (originalImg.Height / Ly) Then
                    L2 = originalImg.Width
                    newWidth = Lx
                    newHeight = originalImg.Height * (Lx / L2)
                    if newHeight > Ly then
                        newWidth = newWidth * (Ly / newHeight)
                        newHeight = Ly
                    end if
                Else
                    L2 = originalImg.Height
                    newHeight = Ly
                    newWidth = originalImg.Width * (Ly / L2)
                    if newWidth > Lx then
                        newHeight = newHeight * (Lx / newWidth)
                        newWidth = Lx
                    end if
                End If
                Dim thumb As New Bitmap(newWidth, newHeight)
                ' create a graphics object
                Dim gr_dest As Graphics = Graphics.FromImage(thumb)
                ' just in case it's a transparent GIF, force the background to white
                dim sb = new SolidBrush(System.Drawing.Color.White)
                gr_dest.FillRectangle(sb, 0, 0, thumb.Width, thumb.Height)
                ' re-draw the image at the specified height and width
                gr_dest.DrawImage(originalImg, 0, 0, thumb.Width, thumb.Height)
                try
                    fileExt = System.IO.Path.GetExtension(fileFld.FileName).ToLower()
                    originalImg.save(Server.MapPath(upload_dir & upload_original & fileExt), originalImg.rawformat)
                    thumb.save(Server.MapPath(upload_dir & upload_thumb & fileExt), originalImg.rawformat)
                    msg = "Uploaded " & fileFld.FileName & " to " & Server.MapPath(upload_dir & upload_original & fileExt)
                    upload_ok = true
                catch
                    msg = "Sorry, there was a problem saving the image."
                end try
                ' housekeeping for the generated thumbnail
                if not thumb is nothing then
                    thumb.Dispose()
                    thumb = nothing
                end if
            catch
                msg = "Sorry, that was not an image we could process."
            end try
        end if
        ' housekeeping!
        if not originalImg is nothing then
            originalImg.Dispose()
            originalImg = nothing
        end if
    end if
    %>

    What I am looking for is a way to have it just go by the height I set:

    const Ly = 60 ' max height for thumbnails

    and have the width be whatever follows from that. So if I had an image of, say, 600 x 120 (w x h) and I used Photoshop to change just the height, it would keep it in ratio and make it 300 x 60 (w x h). That's what I am looking to do with the code here. However, I cannot think of a way to do this (or of a way to just leave a wildcard for the width setting). Any help would be great :o) David
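    A minimal sketch of that height-only scaling, reusing the originalImg and Ly names from the posted code (so this is an assumption-laden fragment, not a drop-in fix): fix the height at Ly and derive the width from the original aspect ratio.

        ' Hedged sketch: assumes originalImg and Ly exist as in the question's code.
        Dim newHeight As Integer = Ly
        ' VB's "/" is floating-point division, so the ratio survives until CInt rounds it.
        Dim newWidth As Integer = CInt(originalImg.Width * (Ly / originalImg.Height))
        Dim thumb As New Bitmap(newWidth, newHeight)
        Using gr As Graphics = Graphics.FromImage(thumb)
            gr.DrawImage(originalImg, 0, 0, newWidth, newHeight)
        End Using
        ' e.g. a 600 x 120 original becomes 300 x 60.

    In other words, drop the Lx branch entirely and let the width fall out of the height scale factor.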

    Read the article

  • CodePlex Daily Summary for Saturday, May 12, 2012

    CodePlex Daily Summary for Saturday, May 12, 2012. Popular releases:
    - KanboxAPI: KanboxAPI beta: Token, info, list and download support.
    - Media Companion: Media Companion 3.502b: It has been a slow week, but this release addresses a couple of recent bugs. Movies: multi-part movies - existing .nfo files that differed in name from the first part were missed and scraped again; trailers - MC attempted to scrape info for existing trailers. TV shows: show scraping - shows available only in the non-default language would not show up in the main browser; the correct language can now be selected using the TV Show Selector for a single show. General: will no longer prompt for ...
    - NewLife XCode: XCode v8.5.2012.0508 and XCoder v4.7.2012.0320: updates covering .Net 4.0 support, Insert/Update/Delete handling and the generated SQL, IEntityOperate, and IEntityTree.
    - dycom: v1.0: DYCom for Silverlight and Windows Phone 7.5.
    - NETMF_for_STM32: Beta 1 Release: First public beta release.
    - Google Book Downloader: Google Books Downloader Lite 1.0.
    - Python Tools for Visual Studio: 1.5 Alpha: We're pleased to announce the release of Python Tools for Visual Studio 1.5 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: support for CPython, IronPython, Jython and PyPy; a Python editor with advanced member and signature IntelliSense and refactoring; code navigation ("Find all refs", go to definition, and an object browser); local and remote debugging...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 1: JayData is a unified data access API to webSQL, indexedDB, OData, Facebook and YQL. The major feature of this release is related to the OData provider: FunctionImport is now generally supported, so you can consume OData service operations (WebMethods). We extended JaySvcUtil to generate the necessary metadata. We included many fixes, such as the Visual Studio 2010 IntelliSense optimization (RC1 was optimized only for VS11). It's recommended to upgrade...
    - AD Gallery: AD Gallery 1.2.7: News: fixed a bug which caused the current thumbnail not to be highlighted; added a hook to take complete control over how descriptions are handled (take a look under Documentation for more info); added removeAllImages().
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.4.8: One-click install from NuGet. Data changes: includes 42 new browser properties in both the Lite and Premium data sets. Premium data includes many new devices, including the Nokia Lumia 900, BlackBerry 9220 and HTC One, the Samsung Galaxy Tab 2 range and the Samsung Galaxy S III. Lite data includes devices released in January 2012. Changes to version 2.1.4.8: 1. The IsFirstTime method of the RedirectModule will now return the same value when called multiple times for the same request. This was prevent...
    - Mugen Injection: Mugen Injection ver 2.2 (WinRT supported): Added NamedParameterAttribute and OptionalParameterAttribute. Added the behaviors ICycleDependencyBehavior and IResolveUnregisteredTypeBehavior. Added WinRT support. Added support for .NET 4.5. Added support for MVC 4.
    - NShape - .Net Diagramming Framework for Industrial Applications: NShape 2.0.1: Bugfixes: IRepository.Insert(Shape shape) and IRepository.Insert(IEnumerable<Shape> shapes) no longer insert shape connections; several context menu items displayed although the required permission was not granted; Display did not reset the visible and active layers when changing the diagram; NullReferenceException when pressing the Del key with no shape selected. Changed behavior: LayerCollection.Find("") no longer throws an exception. Improvements: Display does not rese...
    - AcDown (Anime & Comic Downloader): AcDown v3.11.6: A downloader for Acfun, Bilibili, YouTube and other sites, with AcPlay playback support; written in C#, it runs on 32- and 64-bit Windows XP/Vista/7/8 and requires the .NET Framework 2.0.
    - sb0t: sb0t 4.64: New commands added: #scribble <url>, #adminscribble on, #adminscribble off.
    - Document.Editor: 2012.4: What's new for Document.Editor 2012.4: improved template support, an improved Options dialog, and minor bug fixes, improvements and speed-ups.
    - Json.NET: Json.NET 4.5 Release 5: New feature - added ItemIsReference, ItemReferenceLoopHandling, ItemTypeNameHandling and ItemConverterType to JsonPropertyAttribute. New feature - added ItemRequired to JsonObjectAttribute. New feature - added Path to JsonWriterException. Change - improved deserializer call stack memory usage. Change - moved the PDB files out of the NuGet package into a symbols package. Fix - fixed an infinite loop from an input error when reading an array with error handling enabled. Fix - fixed base objec...
    - BlackJumboDog: Ver5.6.1 (2012.05.07): Fixes for issues introduced in Ver5.6.0, including HTTP over SSL and HTTP handling of content over 2 GB.
    - ExtAspNet: ExtAspNet v3.1.5: A set of ASP.NET 2.0 controls based on ExtJS with AJAX support, designed to avoid handwritten JavaScript and CSS and the use of UpdatePanel, ViewState and WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0. Links: http://extasp.net/, http://bbs.extasp.net/, http://extaspnet.codeplex.com/, http://sanshi.cnblogs.com/
    - SharpDevelop: SharpDevelop 4.2: Please see http://community.sharpdevelop.net/forums/t/15772.aspx for the release announcement.
    - Desktop Google Reader: 1.4.4: The taskbar icon overlay (number of unread items) can now be switched off in preferences (Windows Vista / 7 only); the maximize button can now be toggled between fullscreen (as before) and normal maximize (taskbar stays visible) in preferences; the list of feeds is now sorted alphabetically.

    New projects:
    - 3D Scene Editor: A generic 3D level editor built using XNA to speed up designing and building game levels.
    - Attribute Based Cache using Unity Interception: A Unity interception handler attribute for caching which allows applying the boilerplate caching pattern to classes and class members directly, without configuring them in the application configuration file. Configure your choice of cache provider (ObjectCache, Azure included) in the Unity IoC container and apply the attribute to the method you want to cache.
    - AutoCompleteBox for WinRT: None yet.
    - BAC2 Bachelor's Thesis Source Code: The source code of my Bachelor's Thesis.
    - BAMabase: It's a bamabase.
    - BBSProject: BBSProject.
    - Brain 2 - Game Engine: Brain 2 is a game engine that runs on multiple platforms.
    - cobra-winldtp: Cobra, the Windows version of the Linux Desktop Testing Project (WinLDTP) - http://ldtp.freedesktop.org LDTP is a GUI test automation tool that works on both Windows and Linux. The Windows tool is written in C#; test scripts can be written in Python for now, and a Ruby API will be added soon.
    - CS322: The C# programming language.
    - DoxBotPlugin: A plug-in for IceChat9 that allows a user to use IceChat9 as a normal IRC client and, by switching on bot mode in certain channels, to have DoxBotPlugin act on their behalf.
    - dycom: DYCom (DY Communication).
    - Eyes Protector: A program that helps protect the eyes from strain caused by working at the computer for too long.
    - kshell: A Linux-related project.
    - Lumia: This is the Lumia project.
    - MacroDoc: MacroDoc is an engine written in C# for composing documents from reusable pieces of structure and content.
    - Mssql: This is a SQL Server project.
    - NETMF_for_STM32: This is the CodePlex project for NETMF for STM32 (F4 Edition).
    - Ocular - a free, open source WYSIWYG editor for HTML: Ocular is a free C# WYSIWYG HTML editor, similar to Adobe Dreamweaver. We are always looking for contributors, so please help us!
    - PeopleCredit: A prototype web service to maintain all employee credits.
    - Project Server workflow: This workflow creates the project site for the Basic project plan EPT when the workflow task is approved (this is the correction done over the branching workflow provided with the Project Server 2010 SDK). The workflow task is created using the PSWApprovalTask content type in the Project Server Workflow Task List.
    - Quick Reminder: A program that helps save quick reminders while working at the computer.
    - Random Projects: My random projects...
    - Recommendation Engine Demo: How does the Amazon recommendation work? This is about visualizing the item-to-item collaborative filtering mechanism using an item-to-item matrix table. The item-to-item matrix, the vectors and the calculated data values are displayed. There are n different items, and the item recommendation can display up to m items. Different item-to-item neighborhood functions are implemented: a simple max count of seen neighbor items, the cosine similarity and the Jaccard index. A t...
    - SaveSeaTurtle: Sea Turtle.
    - SharpGpx: SharpGpx implements an object model for reading and writing GPX (the GPS eXchange Format).
    - SlimDo: SlimDo is a scripting language coded in C#.
    - Spring: This is a Spring.Net project.
    - SQL Server Quick Tools Pack: An SQL Server Quick Tools Pack for your SQL Server.
    - SuLD framework: The Supported Link Discovery framework (SuLD) is a tool to discover links to multiple Linked Data datasets. Finding is supported by various features like a synonym module and autocomplete.
    - TQuery.Net: A .Net project.
    - WAAP - World of Warcraft Auction house Analysis Project: A project in which we try to analyse prices and auctioneers in the various World of Warcraft auction houses.
    - xhttp.net: The Xhttp.Net framework is a .NET implementation of the Extended Hypertext Transfer Protocol (http://www.xhttp.org), with simple service integration, full argument support including Base64 and DateTime, single and multiple asynchronous requests, data streaming, remote API creation from XHTTP service schemas, and a runtime plugin architecture.

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion, common in the community, which it would be awesome to have answered here. Maybe my hopes for an Alexandrian stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models - below are the three main descriptions of this I have seen, but they partially contradict each other, or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    gitworkflows(7) Manual Page
    (nvie) A successful Git branching model
    (reinh) A Git Workflow for Agile Teams

    Merging vs rebasing (tangled vs sequential history) - the opinions on this are as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until the task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property that merging isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (or, even worse, merged or something based on it) then the option remaining is cherry-picking, with its associated perils. What simple rules like these do you use? Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature only to start another one feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often the work of our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, per the question above?

    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches - an illustration, please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on Stack Overflow I have checked out:
    What are some good strategies to allow deployed applications to be hotfixable?
    Workflow description for git usage for in-house development
    Git workflow for corporate Linux kernel development
    How do you maintain development code and production code? (thanks for this PDF!)
    git releases management
    Git Cherry-pick vs Merge Workflow
    How to cherry-pick multiple commits
    How do you merge selective files with git-merge?
    How to cherry pick a range of commits and merge into another branch
    ReinH Git Workflow
    git workflow for making modifications you'll never push back to origin
    Cherry-pick a merge
    Proper Git workflow for combined OS and Private code?
    Maintaining Project with Git
    Why can't Git merge file changes with a modified parent/master?
    Git branching / rebasing good practices
    When will "git pull --rebase" get me in to trouble?
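    To make a couple of the options discussed above concrete, here is a hedged sketch of the commands involved (the branch names and the <sha-of-fix> placeholder are illustrative):

        # Keep the visual illustration of where a topic started and ended:
        git checkout master
        git merge --no-ff topic/billing-fix

        # Port a fix that landed too far upstream, recording its origin in the commit message:
        git checkout release-1.2
        git cherry-pick -x <sha-of-fix>

        # Undo the commit on the branch it should not have landed on:
        git checkout master
        git revert <sha-of-fix>

    The -x flag is what makes the cherry-pick traceable later; it does not, by itself, prevent the duplicate-commit merge conflicts the question worries about.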

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73  | Next Page >