Search Results

Search found 10789 results on 432 pages for 'experience'.


  • Tools required for a Web Development Project

    - by RBA
    Hi, I want to build a project on Linux that uses several languages (C, Perl, PHP, HTML, XML, etc.), basically a web-based project. I chose Linux because it is open source and a lot of things can be automated through scripting languages, which I don't know how to do on Windows. So I have installed Linux in a virtual machine (host: Windows 2007, guest: CentOS, command-line interface only). Since I am a beginner, I want to know which tools can be used to facilitate and ease my development process. The ones I know are listed below; please share your experience. 1) Using PuTTY so that I can access the Linux machine from anywhere within the network. 2) I want to develop for Linux but use Windows as the development platform, so I have downloaded the Eclipse IDE (C/PHP) on Windows, but I want to know how I can access the Linux files from there. 3) I have installed Samba and am still trying to figure out how to access the Linux files remotely from Windows (a sample share configuration is sketched below). 4) Please share your experience: how can I ease my development process, and what other tools can I use? Please let me know if you need any other clarification.
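
    For point 3, a minimal Samba share definition is sketched below; the share name, path and user are assumptions, so adjust them to the actual project directory on the CentOS guest.

        # /etc/samba/smb.conf -- add a share for the development directory
        [devshare]
            path = /home/dev/projects
            valid users = dev
            read only = no
            browseable = yes

        # give the Linux account a Samba password, then restart the service
        sudo smbpasswd -a dev
        sudo service smb restart

    The share should then be reachable from Windows as \\centos-host\devshare, so Eclipse on Windows can open the Linux files directly.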


  • Is the reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't respond to generic cleaning (up to and including compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering:

    - every other keyboard I have ever owned, including ultra-cheap crap, managed to last longer than that
    - the typing experience is nice, but not lifechanging-fan-forever nice for me
    - my choice of mechanical keyboards is severely limited: not many brands are represented in the local market, and they are primarily crazy-looking gamer models
    - the need for a Russian (ideally Russian and Ukrainian) layout excludes international ordering
    - the price tag for the meek year of use I got out of it is plain demoralizing

    It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is probably not something I am highly interested in. Considering my constraints and my bad experience with reliability, is it practical for me to sink more money into buying mechanical keyboard(s) again? In other words, manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confident of the same opinion?


  • Terminal emulation has stopped working. Garbage escape chars

    - by oligofren
    To enable me to do some remote administration of our servers, I started using a terminal emulation program called TouchTerm Pro on my iPhone. While not the smoothest experience, it has allowed me to leave my computer behind when going out of town, which makes the slightly painful experience worthwhile. As of late, the app unfortunately no longer works. Pressing the up and down keys after logging in via ssh gives me garbage like ^[[A and ^[[B. Combinations with Ctrl, like you can see in the video, no longer work either. Writing full command lines and executing with the Enter key works, though. Being able to search my bash history was the difference between a usable app and endless frustration, so getting it to work is essential. The app has (of course) met its end of life and is not getting updated anymore. I am not quite sure which side (client or server) has to be "fixed"/hacked to make the control sequences work again. But is there something I can do to make it work as intended? You can see a video of TouchTerm in operation here.
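
    One hedged place to start on the server side: ^[[A and ^[[B are the raw escape sequences for the arrow keys, which readline normally translates using the terminal type the client advertises. Checking and overriding that, or binding the raw sequences directly, may restore history navigation; the values below are assumptions to experiment with.

        # on the server, after logging in from the app
        echo $TERM              # see what terminal type the client claims to be
        export TERM=vt100       # try forcing a widely supported type for this session

        # or, as a workaround, map the raw sequences in ~/.inputrc (readline)
        "\e[A": previous-history
        "\e[B": next-history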


  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: hosted machine (typically a VPS) serving wiki, svn, git, forums, email lists (eg: GNU mailman), Bugzilla (etc) privately to < 20 people. People not on team not allowed access. Seeking VPN-restricted access to said server. Have good user experience with OpenVPN-based servers/clients, but have yet to server-admin such systems. Otherwise, experienced Linux sysadmin. Target system: Ubuntu, probably 12.04. Seeking to put an OpenVPN process on above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access above services. (Can easily acquire additional IP address(es) as needed for this setup.) Option: if absolutely needed, can employ an additional, dedicated, "VPN server" VPS simply to be my VPN server "front end." But prefer to have all server processes (VPN server plus other server apps) all running on same machine, if possible. Will consider further if dedicated-VPN-machine setup enables 1. easier installation/administration, 2. better/easier end-user experience, and/or 3. makes system significantly more secure. Any of above feasible? The main intention: create a VPN from purely-hosted resources, and not spend all the effort to make a non-VPN, secure site--which typically means "SSL wrapping" + all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" in the other apps/Apache.
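
    Running the OpenVPN daemon on the same VPS is a common setup. A minimal sketch of the idea follows; certificate paths and the tunnel subnet are assumptions, and the point is that the private services are then bound only to the VPN-side address so they are unreachable from the public interface.

        # /etc/openvpn/server.conf (sketch)
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key  /etc/openvpn/server.key
        dh   /etc/openvpn/dh.pem
        server 10.8.0.0 255.255.255.0   # clients get addresses from this subnet
        keepalive 10 120
        persist-key
        persist-tun

        # then bind the private services to the tunnel address only, e.g. for Apache:
        Listen 10.8.0.1:80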


  • Choosing gateway router/firewall for small datacenter network [closed]

    - by rvs
    I'm choosing a gateway router/firewall for a small internal network for a medium-sized web service. Currently there are 5 servers in the internal network, up to 50 http(s) requests/second, up to 1000 simultaneous connections, and the uplink is 100 Mbit. So the network is relatively small and not very busy, and we don't want to buy some pricey monster like Cisco or Juniper for this site. Instead we'd like to buy two affordable devices (one as a spare) which can handle our workload now and for some time in the future (it might be up to 2x more in a year). I have had some experience with the SonicWall NSA, but it seems to be too complex for this site (we don't need most of its features) and too pricey when buying two of them. So, after some research, I've come up with the following options: Netgear ProSecure UTM series (probably the UTM25), ZyXEL ZyWALL series (USG100 or USG200), or SonicWall TZ 210. Is this a good idea? All of the above seem to be more office products than datacenter ones. Or should we stick with the SonicWall NSA? Does anyone have hands-on experience with these models? Maybe some other advice? Thanks.


  • Diagnose remote desktop freezes in Windows 7 when no BSOD?

    - by Paul Smith
    Okay, I'm getting no joy from Asus or Microsoft on this, so I'm hoping for some clues on how to narrow down the cause. I have very frequent OS freezes, always and only when running the Remote Desktop client (mstsc) in Windows 7 x64. I never get a bluescreen, and there is never a minidump. The display and input just freeze: no keyboard, no mouse, and the sound just keeps repeating the last fragment, if any. So far, I can't find a way to trap the hang given that there's no bluescreen; the advanced startup and recovery settings for system failure are "Write an event" checked, "Automatically restart" checked, and "Kernel memory dump". I've updated to the latest BIOS and tried a few different graphics drivers, both generic and ATI. I've also tried disabling Aero and everything about the remote desktop experience (incrementally unchecked every box in mstsc's Options > Experience tab), and even disabled/unplugged the external monitor to make sure it wasn't a dual-monitor issue. My specs are:

    - Asus G73jh notebook
    - 8GB RAM
    - ATI Mobility Radeon HD 5800 Series graphics (recently tried driver versions 8.791.0.0, 8.801.0.0)
    - American Megatrends G73jh.211 BIOS (7/27/2010)
    - Windows 7 Home Premium x64

    Windows Memory Diagnostic passed all of the following at least 3 times with no errors: MATS+, INVC, LRAND, Stride6, WMATS+, WINVC. This notebook is better than most at removing heat (laudable vent design), so I'm not inclined to suspect thermal causes (especially since running 1080p video for hours has never caused a freeze, but mstsc does, reliably, within 5 minutes to an hour). This did seem to start happening after a Windows Update, but I've since reverted every patch applied since a week before the first occurrence, with no joy. (And I'd only had the PC for a couple of weeks before that, so it could have been chance plus less actual time spent remoting at the beginning.) I'm at my wits' end, and I bought this laptop primarily as a remote terminal client (go figure, right?). Any ideas on how to identify the cause of this? Thanks!
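
    Since there is no bugcheck to analyse, one standard technique (not mentioned in the post, so treat it as a suggestion) is to enable the keyboard-initiated crash so the hang can be turned into a kernel memory dump on demand:

        :: run from an elevated command prompt, then reboot; afterwards, holding the
        :: right Ctrl key and pressing Scroll Lock twice during a hang forces a bugcheck
        :: and writes the configured kernel memory dump for analysis in WinDbg.
        :: kbdhid is the USB keyboard driver, i8042prt the PS/2 one.
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f

    This only works if the kernel is still servicing keyboard interrupts during the freeze, but when it does, the resulting dump usually points at the driver involved.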


  • What Media Extender / Centre Set up should I use?

    - by Bryn Hird
    I have installed Cat6 throughout the house, which I use for telephony and network. In my cellar I have a NAS server and a gigabit switch, and I want to install a media centre to stream my videos, music, photos and live TV (coax from the aerial to the cellar) over the Cat6. Yeah, I know I can get stuff on the internet, but the shared experience of watching TV as a family as it happens is a big plus for live TV. I'm aiming for 1080p. I want different users to be able to watch different channels. Max users = 4. I've played a little with Windows Media Centre; it works fine with live TV. Likewise I have XBMC up and running with live TV. The issue I have is what to put near the TV. I'd like a consistent user interface (grandma and the other technophobes in the house are continually pestering me on how to use different TVs, change channels, inputs, etc.), so a key part of this for me is to make the user experience the same and simple, i.e. no keyboards / PCs hanging around the TV. I've just bought a Linksys DMA 2200 to test Windows Media Centre, but obviously off eBay as they're a dying breed. And with Windows Media Centre removed from Microsoft's plans, such devices will get rarer. And as for 1080p, I think I can forget it with that setup. I have tested an Xbox 360, which also works, but ditto on Microsoft's plans for WMC. I was thinking of a WD TV Live to test the XBMC setup. Now to the question. Any advice on media centre / extender setups that will do the job as above and have some degree of futureproofing (building my own with my Raspberry Pi is a last resort)? I'd also like to understand the standards involved in the futureproofing, if anyone knows them (DLNA, RVU, etc.).


  • Should I use nginx exclusively, or have it as a proxy to Tomcat (performance related)?

    - by Kevin
    I'm planning to create a website that'll be pretty heavy on dynamic content, and want to know what would be the wisest choice for part of my web stack. Right now I'm trying to decide whether I should develop upon nginx, using PHP to deliver the dynamic content, or use nginx as a proxy to Tomcat and use servlets to deliver the dynamic content. I have a good amount of experience with Java, JSP, and servlets, so that's a plus right off the bat. Also, since it is a compiled language, it will execute faster than PHP (it is implied here that Java is around 37x faster than PHP), and will create the web pages faster. I have no experience with PHP; however, I'm under the impression that it is easy to pick up. It's slower than Java, but since the client will only be communicating with nginx, I'm thinking that serving the dynamically created web pages to the client will be faster this way. Considering these things, I'd like to know: Are my assumptions correct? Where does the bottleneck occur: creating pages or serving them back to the client? Will proxying Tomcat with nginx give me any of nginx's performance benefits if I'm going to be using Tomcat to generate the dynamic content (keeping in mind my site is going to be heavy in this aspect)? I don't mind learning PHP if, in the end, it's going to give me the best performance. I just want to know what would be the best choice from that standpoint.
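
    For reference, the proxy arrangement discussed above usually amounts to a few lines of nginx configuration. This is a hypothetical sketch (hostnames and paths are assumptions), with nginx serving static assets itself and handing everything else to Tomcat's default HTTP connector:

        server {
            listen       80;
            server_name  example.com;

            location /static/ {
                root /var/www;                      # static files served by nginx directly
            }

            location / {
                proxy_pass http://127.0.0.1:8080;   # Tomcat
                proxy_set_header Host              $host;
                proxy_set_header X-Real-IP         $remote_addr;
                proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            }
        }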


  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for storing the database. But since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration; hence, I would like a fully managed hosting solution. But I would like to know about any other option, if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services:

    - https://mongohq.com/pricing
    - https://mongomachine.com/pricing
    - https://mongolab.com/about/pricing/
    - http://cloudcontrol.com/add-ons/mongodb/

    They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly:

    - MongoHQ: the larger plan's max storage is 20 GB. Seems like very little storage for GridFS.
    - MongoMachine: flat price, $2.50 per GB. I didn't find a limit. Seems like a good price compared to the others.
    - MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    - CloudControl: the larger plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities?


  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.


  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows sysadmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux sysadmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all sort of seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head:

    - vi
    - Basic CLI stuff: move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc., etc.
    - More advanced things that I take for granted but are actually pretty hard: doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'.
    - Distribution-specific stuff: compiling software, rpm, yum, apt-get, you name it.

    This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is: where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?
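
    For a flavour of the "advanced things taken for granted" category above, a few illustrative one-liners (paths and thresholds are only examples):

        find /var/log -name '*.log' -mtime -1          # log files modified in the last day
        grep -i error /var/log/messages | tail -n 20   # the most recent errors
        awk -F: '$3 >= 500 {print $1}' /etc/passwd     # non-system user names (UID >= 500 on older CentOS)
        ps aux | sort -rnk 3 | head                    # processes sorted by CPU usage
        tar czf etc-backup-$(date +%F).tar.gz /etc     # quick dated config backup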


  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, based mainly on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcat, some offline Java processes, memcached servers. All hosts run Linux (CentOS). Up until now, when releasing a new version to production, we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and to take code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following:

    - Supports the given list of technologies (Java, Tomcat, Linux, etc.)
    - Provides easy deployment through different stages, including QA and production
    - Provides configuration management, e.g. setting server properties (what the connection URL of each host is, etc.), server.xml or context configuration, etc.
    - Monitoring. If we can get monitoring in the same package, that'll be nice. If not, then yet another tool we can use to monitor our servers.
    - Preferably open source with tons of documentation ;)

    Can anyone share their experience? Suggest a few tools? Thanks!


  • Win7 taskbar freezes on startup for about 1-2 mins

    - by Mike
    Running Win7 64-bit for about 4 months now. Never had this problem, didn't install anything new recently. When I boot up I can't do anything in the taskbar, it's frozen for about 1-2 minutes then everything is normal. I can right click on my desktop and move my mouse around. This randomly just started happening a couple days ago after a reboot. I have a 3.2ghz quad, SSD, 4 gig ram, etc. and it usually starts up quickly. After some troubleshooting (including running antivirus and Anti-Malware), it doesn't appear to be software related, but appears to be services related. I can boot up in safe mode and safe mode with networking just fine. I can also boot up normally with all my regular software loading at startup, BUT with all my services turned off. Now the odd part. When I run msconfig to disable all the services at startup and go through ticking them on 5-10 at a time or so and booting up it seems to be somewhat random. Ticking everything on from "Application Experience" halfway down to about "Quality Windows Audio Video Experience" and I can boot without the 1-2 min. freeze. Then I start ticking the stuff below that from a couple of Remote Accesses to Smart Card and Task Scheduler, etc. But the weird part is sometimes it will freeze sometimes it won't. I can't narrow it down. Then if it freezes, I'll boot up in safe mode and turn the ones I just turned on back off and I'll reboot normally but it will freeze again. Which makes no sense because that configuration just worked without freezing just before. I got frustrated enough that I backed up and wiped my hard drive (formatted and everything) and reinstalled Win7 but when I booted up, the freeze happened again. Any ideas? Thanks in advance.


  • How to configure SVN server for my own project

    - by user1729952
    I work with a team on an Android project using the Eclipse IDE. We need to use version control and we need to access the repo remotely. I have no experience using or installing servers, and only a little experience using SVN on Windows, but I still have problems connecting to it remotely. I need to use no-ip.com because my IP changes; however, I failed to make VisualSVN Server work with no-ip. What options do I have? The best thing would be to get it working on Windows; if not, I have another computer running Ubuntu 12.04.1 on which I have installed apache2svn, trying to get it to work. SVN is installed, and I went through tutorials to configure the access protocols, but I can't figure out how to access it remotely from another computer. Can someone tell me the steps I need to get this job done, so I can do my own research for each step? (Please explain each step, as I may not be familiar with some of the keywords or phrases.) EDIT: Also worth noting that my company has a website hosted on a remote server; can we use it as a repo, and how? It's running Linux.
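
    On the Ubuntu 12.04 box, the usual route is Apache with mod_dav_svn. A minimal sketch follows (the repository path, user name and the dynamic-DNS hostname are assumptions); once it's in place, the repo is reachable from any machine that can reach port 80 on that host.

        sudo apt-get install apache2 subversion libapache2-svn
        sudo svnadmin create /var/lib/svn/myproject
        sudo chown -R www-data:www-data /var/lib/svn
        sudo htpasswd -c /etc/apache2/dav_svn.passwd alice

        # /etc/apache2/mods-available/dav_svn.conf
        <Location /svn>
          DAV svn
          SVNParentPath /var/lib/svn
          AuthType Basic
          AuthName "Subversion repositories"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

        sudo a2enmod dav_svn && sudo service apache2 restart
        # check out from another machine (through the no-ip hostname, with port 80 forwarded):
        svn checkout http://your-noip-hostname/svn/myproject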


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
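
    For readers unfamiliar with the actor pattern Alex describes, here is a minimal, generic C# sketch of the idea (an illustration only, not NAct's actual API): one thread owns the mutable state, and everyone else interacts with it purely by posting messages.

        // A minimal, hypothetical actor: all mutable state is owned by one thread,
        // and other threads talk to it only by posting messages onto its mailbox.
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class SimpleActor : IDisposable
        {
            private readonly BlockingCollection<Action> _mailbox = new BlockingCollection<Action>();
            private readonly Thread _worker;

            public SimpleActor()
            {
                _worker = new Thread(() =>
                {
                    // Every message runs on this one thread, so the actor's state
                    // is never touched concurrently and no locks are needed.
                    foreach (var message in _mailbox.GetConsumingEnumerable())
                        message();
                });
                _worker.IsBackground = true;
                _worker.Start();
            }

            public void Post(Action message) { _mailbox.Add(message); }

            public void Dispose() { _mailbox.CompleteAdding(); _worker.Join(); }
        }

        class Counter
        {
            private readonly SimpleActor _actor = new SimpleActor();
            private int _count; // owned by the actor's thread only

            public void Increment() { _actor.Post(() => _count++); }

            // Reads also go through the mailbox; the reply comes back as a message.
            public void Read(Action<int> reply) { _actor.Post(() => reply(_count)); }
        }

    Frameworks like NAct wrap this plumbing up so that posting a message looks like an ordinary method call, which is the strong-typing convenience discussed in the interview.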


  • Designing interfaces: predict methods needed, discipline yourself and deal with code that comes to mind

    - by fireeyedboy
    Was: Design by contract: predict methods needed, discipline yourself and deal with code that comes to mind. I like the idea of designing by contract a lot (at least, as far as I understand the principle). I believe it means you define interfaces first before you start implementing actual code, right? However, from my limited experience (3 OOP years now) I usually can't resist the urge to start coding pretty early, for several reasons:

    - because my limited experience has shown me I am unable to predict what methods I will be needing in the interface, so I might as well start coding right away;
    - or because I am simply too impatient to write out all the interfaces first;
    - or, when I do try it, I still wind up implementing bits of code already, because I fear I might forget this or that important bit of code that springs to mind while I am designing the interfaces.

    As you see, especially with the last two points, this leads to a very disorderly way of doing things. Tasks get mixed up. I should draw a clear line between designing interfaces and actual coding. If you, unlike me, are a good/disciplined planner, as intended above, how do you:

    - know the majority of methods you will be needing up front so well? Especially if it's components that implement stuff you are not familiar with yet.
    - resist the urge to start coding right away?
    - deal with code that comes to mind when you are designing the interfaces?

    UPDATE: Thank you for the answers so far. Valuable insights! And... I stand corrected; it seems I misinterpreted the idea of Design by Contract. For clarity, what I actually meant was: "coming up with interface methods before implementing the actual components". An additional thing that came up in my mind is related to point 1): b) How do you know the majority of components you will be needing? How do you flesh out these things before you start actually coding? For argument's sake, let's say I'm a novice with the MVC pattern, and I wanted to implement such a component/architecture. A naive approach would be to think of:

    - a front controller
    - some abstract action controller
    - some abstract view

    ...and be done with it, so to speak. But, being more familiar with the MVC pattern, I know now that it makes sense to also have:

    - a request object
    - a router
    - a dispatcher
    - a response object
    - view helpers
    - etc., etc.

    If you map this idea to some completely new component you want to develop, with which you have no experience yet, how do you come up with these sorts of additional components without actually coding the thing, and stumble upon the ideas that way? How would you know up front how fine-grained some components should be? Is this a matter of disciplining yourself to think it out thoroughly? Or is it a matter of being good at thinking in abstractions?


  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than 5 years ago I was playing with DirectSound and Direct3D and I found it really exciting, although it took much time to get some good results with C++. I was a college student then. Now I have mostly enterprise development experience in C# and PHP, and I do it for a living. There is really no chance to earn money with serious game development in our country. Each day, more and more, I find that I miss something. So I decided to spend an hour or so each day programming for fun. So my idea is to build a world simulation. I would like to begin with something simple: some human-like creatures that live their life, like Sims 3 but much simpler, just basic needs, basic animations, minimum graphic assets. I guess it won't be a city but just a large house for a start. The idea is to have some kind of server application which stores the world data in a MySQL database, and some client applications: body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D. So there are also 3D clients: I can enter that virtual world and see the AI bots living. When a bot enters the visible area, it becomes material: it loads a mesh and animations, so I can see it. When I leave, the bots lose their 3D mesh bodies again, but their virtual life still continues. With time I hope to make it into some expandable, scriptable sandbox to experiment with various AI algorithms and so on. But I do not intend to create a full-blown MMORPG :D I have looked for many possible things I would need (free and open source) and now I have to make a choice:

    - OGRE3D + enet (or RakNet). Good old C++. But won't it slow me down so much that I won't have fun any more?
    - CrystalSpace. Formally not a game engine but very close to that. C++ again.
    - MOgre (an OGRE3D wrapper for .NET) + lidgren (a networking library which is already used in some gaming projects). Good: I like C#, it is good for fast programming and can also be used for scripting.
    - XNA seems to be just a framework, not an engine, so I really have doubts whether I should even look at XNA Game Studio :(
    - Panda3D: a full game engine with positive feedback. I really like the idea of having all the toolset in one package, and it has good reviews as a beginner-friendly engine... if you know Python. On the C++ side, Panda3D has almost non-existent documentation. I have 0 experience with Python, but I've heard it is easy to learn. And if it will be fun and challenging then I guess I would benefit from experience in one more programming language.

    Which of those would you suggest, not because of advanced features or good platform support but mostly for fun, easy workflow and expandability, and so I can create and integrate all the components I need: the server with the database, AI bots and a 3D client application?


  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary: Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated asp.net application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience in asp.net's built-in membership provider stuff, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the asp.net membership provider mindset and develop our own membership provider.

    Our Application: We have a fairly old asp.net application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up) and, depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to 1 group and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by Admins (web-based UI) and, as said earlier, granted / denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .net authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if they aren't.

    Our Rewrite: We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever types of containers LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been granted the go-ahead to completely rip out the user authentication and authorization code and completely re-do it.

    Our Problem: The problem is that none of us have any experience with asp.net membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET Membership Provider and Role Manager sounds like it would be a great experience, and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET Membership Provider & Role Management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our Requirements: Just a quick n dirty list:

    - Maintain the ability to have a db of users and authenticate them, and give admins (only, not users) the ability to CRUD users
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between Groups as they exist in our app / db and the Groups/Containers as they exist in LDAP.
    - .net 3.5 is being used (mix of asp.net webforms and asp.net mvc)
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem I'm guessing)
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups
    - We have to be able to authenticate via LDAP when it's configured to do so

    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer is just: "You should/shouldn't use xyz, here's why". Links regarding asp.net membership provider and role management stuff are very welcome; most of the stuff I'm finding is 5+ years old. Edit: Added some stuff to "Our Rewrite"
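
    If you do go the provider route, the shape of it is roughly this: custom classes deriving from System.Web.Security.MembershipProvider and RoleProvider (overriding members such as ValidateUser and GetRolesForUser with your DB/LDAP logic), registered in web.config. The class and namespace names below are assumptions, purely to show the wiring:

        <system.web>
          <membership defaultProvider="CustomMembershipProvider">
            <providers>
              <clear />
              <add name="CustomMembershipProvider"
                   type="MyApp.Security.CustomMembershipProvider, MyApp" />
            </providers>
          </membership>
          <roleManager enabled="true" defaultProvider="CustomRoleProvider">
            <providers>
              <clear />
              <add name="CustomRoleProvider"
                   type="MyApp.Security.CustomRoleProvider, MyApp" />
            </providers>
          </roleManager>
        </system.web>

    Once registered, Membership.ValidateUser, Roles.IsUserInRole and the standard Authorize attribute in MVC all call into your implementations, which is the main payoff of staying inside the provider model rather than rolling your own.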


  • Create Views for object properties in model in MVC 3 application?

    - by Anders Svensson
    I have an Asp.Net MVC 3 application with a database "Consultants", accessed by EF. Now, the Consultant table in the db has a one-to-many relationship to several other tables for CV type information (work experience, etc). So a user should be able to fill in their name etc once, but should be able to add a number of "work experiences", and so on. But these foreign key tables are complex objects in the model, and when creating the Create View I only get the simple properties as editor fields. How do I go about designing the View or Views so that the complex objects can be filled in as well? I picture a View in my mind where the simple properties are simple fields, and then some sort of control where you can click "add work experience", and as many as needed would be added. But how would I do that and still utilize the model binding? In fact, I don't know how to go about it at all. (BTW, Program and Language stand for things like software experience in general, and natural language competence, not programming languages, in case you're wondering about the relationships there). Any ideas greatly appreciated! Here's the Create View created by the Add View command by default:

        @{
            ViewBag.Title = "Create";
        }

        <h2>Create</h2>

        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

        @using (Html.BeginForm()) {
            @Html.ValidationSummary(true)
            <fieldset>
                <legend>Consultant</legend>

                <div class="editor-label">
                    @Html.LabelFor(model => model.FirstName)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.FirstName)
                    @Html.ValidationMessageFor(model => model.FirstName)
                </div>

                <div class="editor-label">
                    @Html.LabelFor(model => model.LastName)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.LastName)
                    @Html.ValidationMessageFor(model => model.LastName)
                </div>

                <div class="editor-label">
                    @Html.LabelFor(model => model.UserName)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.UserName)
                    @Html.ValidationMessageFor(model => model.UserName)
                </div>

                <div class="editor-label">
                    @Html.LabelFor(model => model.Description)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Description)
                    @Html.ValidationMessageFor(model => model.Description)
                </div>

                <p>
                    <input type="submit" value="Create" />
                </p>
            </fieldset>
        }

        <div>
            @Html.ActionLink("Back to List", "Index")
        </div>

    And here's the EF database diagram:
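
    One common way to get the child collections into the view while keeping default model binding is an editor template plus EditorFor over the collection. This sketch assumes the Consultant model exposes a WorkExperiences collection with fields like CompanyName and Description (the names are assumptions; adjust them to the actual EF model):

        @* Views/Shared/EditorTemplates/WorkExperience.cshtml *@
        @model MyApp.Models.WorkExperience
        <div class="editor-field">
            @Html.LabelFor(m => m.CompanyName)
            @Html.EditorFor(m => m.CompanyName)
            @Html.LabelFor(m => m.Description)
            @Html.EditorFor(m => m.Description)
        </div>

        @* In the Create view, inside the form: *@
        @Html.EditorFor(model => model.WorkExperiences)

    EditorFor renders the template once per element with indexed names such as WorkExperiences[0].CompanyName, which the default model binder reassembles into the collection on post; the "add work experience" button then only has to clone a row client-side with the next index (or use a helper along the lines of Steve Sanderson's BeginCollectionItem).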

    Read the article

  • Run Windows in Ubuntu with VMware Player

    - by Matthew Guay
    Are you an enthusiast who loves their Ubuntu Linux experience but still needs to use Windows programs?  Here’s how you can get the full Windows experience on Ubuntu with the free VMware Player. Linux has become increasingly consumer friendly, but still, the wide majority of commercial software is only available for Windows and Macs.  Dual-booting between Windows and Linux has been a popular option for years, but this is a frustrating solution since you have to reboot into the other operating system each time you want to run a specific application.  With virtualization, you’ll never have to make this tradeoff.  VMware Player makes it quick and easy to install any edition of Windows in a virtual machine.  With VMware’s great integration tools, you can copy and paste between your Linux and Windows programs and even run native Windows applications side-by-side with Linux ones. Getting Started Download the latest version of VMware Player for Linux, and select either the 32-bit or 64-bit version, depending on your system.  VMware Player is a free download, but requires registration.  Sign in with your VMware account, or create a new one if you don’t already have one. VMware Player is fairly easy to install on Linux, but you will need to start out the installation from the terminal.  First, enter the following to make sure the installer is marked as executable, substituting version/build_number for the version number on the end of the file you downloaded. chmod +x ./VMware-Player-version/build_number.bundle Then, enter the following to start the install, again substituting your version number: gksudo bash ./VMware-Player-version/build_number.bundle You may have to enter your administrator password to start the installation, and then the VMware Player graphical installer will open.  Choose whether you want to check for product updates and submit usage data to VMware, and then proceed with the install as normal. VMware Player installed in only a few minutes in our tests, and was immediately ready to run, no reboot required.  You can now launch it from your Ubuntu menu: click Applications \ System Tools \ VMware Player. You’ll need to accept the license agreement the first time you run it. Welcome to VMware Player!  Now you can create new virtual machines and run pre-built ones on your Ubuntu desktop. Install Windows in VMware Player on Ubuntu Now that you’ve got VMware setup, it’s time to put it to work.  Click the Create a New Virtual Machine as above to start making a Windows virtual machine. In the dialog that opens, select your installer disk or ISO image file that you want to install Windows from.  In this example, we’re select a Windows 7 ISO.  VMware will automatically detect the operating system on the disk or image.  Click Next to continue. Enter your Windows product key, select the edition of Windows to install, and enter your name and password. You can leave the product key field blank and enter it later.  VMware will ask if you want to continue without a product key, so just click Yes to continue. Now enter a name for your virtual machine and select where you want to save it.  Note: This will take up at least 15Gb of space on your hard drive during the install, so make sure to save it on a drive with sufficient storage space. You can choose how large you want your virtual hard drive to be; the default is 40Gb, but you can choose a different size if you wish.  
The entire amount will not be used up on your hard drive initially, but the virtual drive will increase in size up to your maximum as you add files.  Additionally, you can choose if you want the virtual disk stored as a single file or as multiple files.  You will see the best performance by keeping the virtual disk as one file, but the virtual machine will be more portable if it is broken into smaller files, so choose the option that will work best for your needs. Finally, review your settings, and if everything looks good, click Finish to create the virtual machine. VMware will take over now, and install Windows without any further input using its Easy Install.  This is one of VMware’s best features, and is the main reason we find it the easiest desktop virtualization solution to use.   Installing VMware Tools VMware Player doesn’t include the VMware Tools by default; instead, it automatically downloads them for the operating system you’re installing.  Once you’ve downloaded them, it will use those tools anytime you install that OS.  If this is your first Windows virtual machine to install, you may be prompted to download and install them while Windows is installing.  Click Download and Install so your Easy Install will finish successfully. VMware will then download and install the tools.  You may need to enter your administrative password to complete the install. Other than this, you can leave your Windows install unattended; VMware will get everything installed and running on its own. Our test setup took about 30 minutes, and when it was done we were greeted with the Windows desktop ready to use, complete with drivers and the VMware tools.  The only thing missing was the Aero glass feature.  VMware Player is supposed to support the Aero glass effects in virtual machines, and although this works every time when we use VMware Player on Windows, we could not get it to work in Linux.  Other than that, Windows is fully ready to use.  You can copy and paste text, images, or files between Ubuntu and Windows, or simply drag-and-drop files between the two. Unity Mode Using Windows in a window is awkward, and makes your Windows programs feel out of place and hard to use.  This is where Unity mode comes in.  Click Virtual Machine in VMware’s menu, and select Enter Unity. Your Windows desktop will now disappear, and you’ll see a new Windows menu underneath your Ubuntu menu.  This works the same as your Windows Start Menu, and you can open your Windows applications and files directly from it. By default, programs from Windows will have a colored border and a VMware badge in the corner.  You can turn this off from the VMware settings pane.  Click Virtual Machine in VMware’s menu and select Virtual Machine Settings.  Select Unity under the Options tab, and uncheck the Show borders and Show badges boxes if you don’t want them. Unity makes your Windows programs feel at home in Ubuntu.  Here we have Word 2010 and IE8 open beside the Ubuntu Help application.  Notice that the Windows applications show up in the taskbar on the bottom just like the Linux programs.  If you’re using the Compiz graphics effects in Ubuntu, your Windows programs will use them too, including the popular wobbly windows effect. You can switch back to running Windows inside VMware Player’s window by clicking the Exit Unity button in the VMware window. Now, whenever you want to run Windows applications in Linux, you can quickly launch it from VMware Player. Conclusion VMware Player is a great way to run Windows on your Linux computer. 
 It makes it extremely easy to get Windows installed and running, and lets you run your Windows programs seamlessly alongside your Linux ones.  VMware products work great in our experience, and VMware Player on Linux was no exception. If you're a Windows user and you'd like to run Ubuntu on Windows, check out our article on how to Run Ubuntu in Windows with VMware Player. Links: Download VMware Player 3 (Registration required); Download Windows 7 Enterprise 90-day trial.

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old, actually too old, idea since I ever arrived here in Mauritius more than six years ago. I'm talking about a community for all kind of ICT connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I addressed the responsible person of the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly. Why did it take you so long? Well, I don't want to bother you with the details but short version is that I was too busy on either job (building up new companies) or private life (got married and we have two lovely children, eh 'monsters') or even both. But now is the time where I was starting to look for new fields given the fact that I gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that! Why a new user group? Good question... And 'easy' to answer. Since back in 2007 I did my usual research, eh Google searches, to see whether there existing user groups in Mauritius and in which field of interest. And yes, there are! If I recall this correctly, then there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (which used to be even two). But... either they do not exist anymore, they are dormant, or there is only a low heart-beat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticizing others doing a very good job, I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it. Ok, so what's up with 'Mauritius Software Craftsmanship Community' or short: MSCC? As I've already written in my article on 'Communities - The importance of exchange and discussion' I think it is essential in a world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamic and every day's news that it is almost impossible to keep on track with all of them. The MSCC is going to provide a common platform to exchange experience and share knowledge between each other. You might be a newbie and want to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator, or you're an experienced geek that loves to share your ideas or solutions that you implemented to solve a specific problem, or you're the business (or HR) guy that is looking for 'fresh' blood to enforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people. Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go. The MSCC is technology-agnostic and spans an umbrella over any kind of technology. Simply because you can't ignore other technologies anymore in a connected IT world as we have. 
A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure Web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others opinion about an employer or get new impulses on how to tackle down an issue at their workspace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy. Compared to dusty books and remote online resources. It's human! Organising meetups (meetings, get-together, gatherings - you name it!) As of writing this article, the MSCC is currently meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change in the future eventually but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-) The MSCC's main online presence is located at Meetup.com because it allows me to handle the organisation of events and meeting appointments very easily, and any member can have a look who else is involved so that an exchange of contacts is given at any time. In combination with the other entities (G+ Communities, FB Pages or in Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community This is a community for those who care and are proud of what they do. For those developers, regardless how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet but it's a start... Let's see how it looks like by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and to stay informed on latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as free smartphone app. Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to cover as many people as possible here in Mauritius. Following is an overview of the current networks: Twitter - Latest updates and quickies Google+ - Community channel Facebook - Community Page LinkedIn - Community Group Trello - Collaboration workspace to share and develop ideas Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC between your colleagues, your friends and other interested 'geeks'. Your future looks bright Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone. 
On the one side it is very joyful for me to organise appointments and get in touch with people that might be interested to present a little demo of their projects or their recent problems they had to tackle down, and on the other side there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kind of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your point of view on the local market and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, self-employed, students and of course employees. Meetings will be organised on a regular basis, and I'm open to all kind of suggestions from you. Please leave a comment here in blog or join the conversations in the above mentioned social networks. Let's get this community up and running, my fellow Mauritians! Recent updates The MSCC is now officially participating in the O'Reilly UK User Group programm and we are allowed to request review or recension copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programms are pending and I'm looking forward to a couple of announcement very soon. And... we need some kind of 'corporate identity' - Over at the MSCC website there is a call for action (or better said a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • What You Need to Know About Windows 8.1

    - by Chris Hoffman
    Windows 8.1 is available to everyone starting today, October 19. The latest version of Windows improves on Windows 8 in every way. It’s a big upgrade, whether you use the desktop or new touch-optimized interface. The latest version of Windows has been dubbed “an apology” by some — it’s definitely more at home on a desktop PC than Windows 8 was. However, it also offers a more fleshed out and mature tablet experience. How to Get Windows 8.1 For Windows 8 users, Windows 8.1 is completely free. It will be available as a download from the Windows Store — that’s the “Store” app in the Modern, tiled interface. Assuming upgrading to the final version will be just like upgrading to the preview version, you’ll likely see a “Get Windows 8.1″ pop-up that will take you to the Windows Store and guide you through the download process. You’ll also be able to download ISO images of Windows 8.1, so can perform a clean install to upgrade. On any new computer, you can just install Windows 8.1 without going through Windows 8. New computers will start to ship with Windows 8.1 and boxed copies of Windows 8 will be replaced by boxed copies of Windows 8.1. If you’re using Windows 7 or a previous version of Windows, the update won’t be free. Getting Windows 8.1 will cost you the same amount as a full copy of Windows 8 — $120 for the standard version. If you’re an average Windows 7 user, you’re likely better off waiting until you buy a new PC with Windows 8.1 included rather than spend this amount of money to upgrade. Improvements for Desktop Users Some have dubbed Windows 8.1 “an apology” from Microsoft, although you certainly won’t see Microsoft referring to it this way. Either way, Steven Sinofsky, who presided over Windows 8′s development, left the company shortly after Windows 8 was released. Coincidentally, Windows 8.1 contains many features that Steven Sinofsky and Microsoft refused to implement. Windows 8.1 offers the following big improvements for desktop users: Boot to Desktop: You can now log in directly to the desktop, skipping the tiled interface entirely. Disable Top-Left and Top-Right Hot Corners: The app switcher and charms bar won’t appear when you move your mouse to the top-left or top-right corners of the screen if you enable this option. No more intrusions into the desktop. The Start Button Returns: Windows 8.1 brings back an always-present Start button on the desktop taskbar, dramatically improving discoverability for new Windows 8 users and providing a bigger mouse target for remote desktops and virtual machines. Crucially, the Start menu isn’t back — clicking this button will open the full-screen Modern interface. Start menu replacements will continue to function on Windows 8.1, offering more traditional Start menus. Show All Apps By Default: Luckily, you can hide the Start screen and its tiles almost entirely. Windows 8.1 can be configured to show a full-screen list of all your installed apps when you click the Start button, with desktop apps prioritized. The only real difference is that the Start menu is now a full-screen interface. Shut Down or Restart From Start Button: You can now right-click the Start button to access Shut down, Restart, and other power options in just as many clicks as you could on Windows 7. Shared Start Screen and Desktop Backgrounds; Windows 8 limited you to just a few Steven Sinofsky-approved background images for your Start screen, but Windows 8.1 allows you to use your desktop background on the Start screen. 
This can make the transition between the Start screen and desktop much less jarring. The tiles or shortcuts appear to be floating above the desktop rather than off in their own separate universe. Unified Search: Unified search is back, so you can start typing and search your programs, settings, and files all at once — no more awkwardly clicking between different categories when trying to open a Control Panel screen or search for a file. These all add up to a big improvement when using Windows 8.1 on the desktop. Microsoft is being much more flexible — the Start menu is full screen, but Microsoft has relented on so many other things and you’d never have to see a tile if you didn’t want to. For more information, read our guide to optimizing Windows 8.1 for a desktop PC. These are just the improvements specifically for desktop users. Windows 8.1 includes other useful features for everyone, such as deep SkyDrive integration that allows you to store your files in the cloud without installing any additional sync programs. Improvements for Touch Users If you have a Windows 8 or Windows RT tablet or another touch-based device you use the interface formerly known as Metro on, you’ll see many other noticeable improvements. Windows 8′s new interface was half-baked when it launched, but it’s now much more capable and mature. App Updates: Windows 8′s included apps were extremely limited in many cases. For example, Internet Explorer 10 could only display ten tabs at a time and the Mail app was a barren experience devoid of features. In Windows 8.1, some apps — like Xbox Music — have been redesigned from scratch, Internet Explorer allows you to display a tab bar on-screen all the time, while apps like Mail have accumulated quite a few useful features. The Windows Store app has been entirely redesigned and is less awkward to browse. Snap Improvements: Windows 8′s Snap feature was a toy, allowing you to snap one app to a small sidebar at one side of your screen while another app consumed most of your screen. Windows 8.1 allows you to snap two apps side-by-side, seeing each app’s full interface at once. On larger displays, you can even snap three or four apps at once. Windows 8′s ability to use multiple apps at once on a tablet is compelling and unmatched by iPads and Android tablets. You can also snap two of the same apps side-by-side — to view two web pages at once, for example. More Comprehensive PC Settings: Windows 8.1 offers a more comprehensive PC settings app, allowing you to change most system settings in a touch-optimized interface. You shouldn’t have to use the desktop Control Panel on a tablet anymore — or at least not as often. Touch-Optimized File Browsing: Microsoft’s SkyDrive app allows you to browse files on your local PC, finally offering a built-in, touch-optimized way to manage files without using the desktop. Help & Tips: Windows 8.1 includes a Help+Tips app that will help guide new users through its new interface, something Microsoft stubbornly refused to add during development. There’s still no “Modern” version of Microsoft Office apps (aside from OneNote), so you’ll still have to head to use desktop Office apps on tablets. It’s not perfect, but the Modern interface doesn’t feel anywhere near as immature anymore. Read our in-depth look at the ways Microsoft’s Modern interface, formerly known as Metro, is improved in Windows 8.1 for more information. In summary, Windows 8.1 is what Windows 8 should have been. 
All of these improvements are on top of the many great desktop features, security improvements, and all-around battery life and performance optimizations that appeared in Windows 8. If you’re still using Windows 7 and are happy with it, there’s probably no reason to race out and buy a copy of Windows 8.1 at the rather high price of $120. But, if you’re using Windows 8, it’s a big upgrade no matter what you’re doing. If you buy a new PC and it comes with Windows 8.1, you’re getting a much more flexible and comfortable experience. If you’re holding off on buying a new computer because you don’t want Windows 8, give Windows 8.1 a try — yes, it’s different, but Microsoft has compromised on the desktop while making a lot of improvements to the new interface. You just might find that Windows 8.1 is now a worthwhile upgrade, even if you only want to use the desktop.     

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
Here’s what the manifest file looks like for the JavaScript Reference application: CACHE MANIFEST # v30 Default.aspx # Standard Script Libraries Scripts/jquery-1.4.4.min.js Scripts/jquery-ui-1.8.7.custom.min.js Scripts/jquery.tmpl.min.js Scripts/json2.js # App Scripts App_Scripts/combine.js App_Scripts/combine.debug.js # Content (CSS & images) Content/default.css Content/logo.png Content/ui-lightness/jquery-ui-1.8.7.custom.css Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png Content/ui-lightness/images/ui-icons_222222_256x240.png Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png Content/ui-lightness/images/ui-icons_ffffff_256x240.png Content/ui-lightness/images/ui-icons_ef8c08_256x240.png Content/browsers/c8.png Content/browsers/es3.png Content/browsers/es5.png Content/browsers/ff3_6.png Content/browsers/ie8.png Content/browsers/ie9.png Content/browsers/sf5.png NETWORK: Services/EntryService.svc http://superexpert.com/resources/JavaScriptReference/ A Cache Manifest file always starts with the line of text Cache Manifest. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, jQuery library, JQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference. There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file) then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file then you need to modify the comment to be # v31 in order for the new file to be downloaded. When Are Updated Files Downloaded? When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background. 
This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience. Offline Web Applications versus Local Storage Be careful to not confuse the HTML5 Offline Web Application feature and HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache. Creating a Manifest File in an ASP.NET Application A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project: Manifest.txt – This text file contains the actual manifest file. Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest. Here’s the code for the generic handler: using System.Web; namespace JavaScriptReference { public class Manifest : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/cache-manifest"; context.Response.WriteFile(context.Server.MapPath("Manifest.txt")); } public bool IsReusable { get { return false; } } } } The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this: <html manifest="Manifest.ashx"> Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested. Seeing the Offline Web Application in Action The experience of using an HTML5 Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice. Chrome and Safari will create an offline cache automatically. If you click the Allow button then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar: You can view the actual items being cached by clicking the List Cache Entries link: The Offline Web Application experience is different in the case of Google Chrome. 
You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache: Notice that you view the status of the Application Cache. In the screen shot above, the status is UNCACHED which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard: UNCACHED – The Application Cache has not been initialized. IDLE – The Application Cache is not currently being updated. CHECKING – The Application Cache is being fetched and checked for updates. DOWNLOADING – The files in the Application Cache are being updated. UPDATEREADY – There is a new version of the Application. OBSOLETE – The contents of the Application Cache are obsolete. Summary In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.
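One follow-on idea, offered as an assumption rather than anything the JavaScript Reference application actually does: since forgetting to bump the "# v30" comment is the most common way to end up serving stale files, the Manifest.ashx handler shown above can be extended to derive that comment from the last-write times of the files listed in Manifest.txt, so editing any cached file invalidates the offline cache automatically. The AutoVersionManifest class name is hypothetical; the file names and the text/cache-manifest content type are the ones used in this article.

using System;
using System.IO;
using System.Linq;
using System.Web;

namespace JavaScriptReference {

    // Serves Manifest.txt, but injects a version comment computed from the
    // last-write times of the local files it lists, so the browser re-downloads
    // the cache whenever one of those files changes.
    public class AutoVersionManifest : IHttpHandler {

        public void ProcessRequest(HttpContext context) {
            string manifestPath = context.Server.MapPath("Manifest.txt");
            string[] lines = File.ReadAllLines(manifestPath);

            // Only local entries above the NETWORK: section feed the stamp;
            // comments and the CACHE MANIFEST header are ignored.
            long stamp = lines
                .TakeWhile(l => !l.StartsWith("NETWORK:", StringComparison.OrdinalIgnoreCase))
                .Where(l => l.Trim().Length > 0 && !l.StartsWith("#") && !l.StartsWith("CACHE MANIFEST"))
                .Select(l => context.Server.MapPath(l.Trim()))
                .Where(File.Exists)
                .Select(p => File.GetLastWriteTimeUtc(p).Ticks)
                .DefaultIfEmpty(0L)
                .Max();

            context.Response.ContentType = "text/cache-manifest";
            context.Response.Write("CACHE MANIFEST\n");
            context.Response.Write("# auto-version " + stamp + "\n");

            // Echo the rest of the manifest unchanged (the original header line is skipped).
            foreach (string line in lines.Skip(1)) {
                context.Response.Write(line + "\n");
            }
        }

        public bool IsReusable {
            get { return false; }
        }
    }
}

The NETWORK: section and any remote URLs are passed through untouched; only the local entries above it contribute to the version stamp.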

    Read the article

  • Oracle WebCenter Partner Program

    - by kellsey.ruppel
    In competitive marketplaces, your company needs to quickly respond to changes and new trends, in order to open opportunities and build long-term growth. Oracle has a variety of next-generation services, solutions and resources that will leverage the differentiators in your offerings. Name your partnering needs: Oracle has the answer. This week we’d like to focus on Partners and the value your organization can gain from working with the Oracle PartnerNetwork. The Oracle PartnerNetwork will empower your company with exceptional resources to distinguish your offerings from the competition, seize opportunities, and increase your sales. We’re happy to welcome Christine Kungl, and Brian Buzzell, from Oracle’s World Wide Alliances & Channels (WWA&C) WebCenter Partner Enablement team, as today’s guests on the Oracle WebCenter blog. Q: What is the Oracle PartnerNetwork (OPN)?A: Christine: Oracle’s PartnerNetwork (OPN) is a collaborative partnership which allows registered companies specific added value resources to help differentiate themselves from their competition. Through OPN programs it provides companies the ability to seize and target opportunities, educate and train their teams, and leverage unparalleled opportunity given Oracle’s large market footprint. OPN’s multi-level programs are targeted at different levels allowing companies to grow and evolve with Oracle based on their business needs.  As part of their OPN memberships partners are encouraged to become OPN Specialized allowing those partners additional differentiation in Oracle’s Partner Network Community.  Q: What is an OPN Specialization and what resources are available for Specialized Partners?A: Brian: Oracle wanted a better way for our partners to differentiate their special skills and expertise, as well a more effective way to communicate that difference to customers.  Oracle’s expanding product portfolio demanded that we be able to identify partners with significant product knowledge—those who had made an investment in Oracle and a continuing commitment to deliver Oracle solutions. And with more than 30,000 Oracle partners around the world, Oracle needed a way for our customers to choose the right partner for their business. So how did Oracle meet this need? With the new partner program:  Oracle PartnerNetwork (OPN) Specialized. In this new program, Oracle partners are: Specialized :  Differentiating themselves from the competition with expertise that set them apart Recognized:  Being acknowledged for investing in becoming Oracle experts in specialized areas. Preferred :  Connecting with potential customers who are seeking  value-added solutions for their business OPN Specialized provides all partners with educational opportunities, training, and tools specially designed to build competency and grow business.  Partners can serve their customers better through key resources:OPN Specialized Knowledge Zones – Located on the updated and enhanced OPN portal— provide a single point of entry for all education and training information for Oracle partners. Enablement 2.0 Resources —Enablement 2.0 helps Oracle partners build their competencies and skills through a variety of educational opportunities and expanded training choices. These resources include: Enablement 2.0 “Boot camps” provide three-tiered learning levels that help jump-start partner training The role-based training covers Oracle’s application and technology products and offers a combination of classroom lectures, hands-on lab exercises, and case studies. 
Enablement 2.0 Interactive guided learning paths (GLPs) with recommendations on how to achieve specialization Upgraded partner solution kits Enhanced, specialized business centers available 24/7 around the globe on the OPN portal OPN Competency Center—Tracking ProgressThe OPN Competency Center keeps track as a partner applies for and achieves specialization in selected areas. You start with an assessment that compares your organization’s current skills and experience with the requirements for specialization in the area you have chosen. The OPN Competency Center then provides a roadmap that itemizes the skills and the knowledge you need to earn specialized status. In summary, OPN Specialization not only includes key training resources but a way to track and show progression for your partner organization. Q: What is are the OPN Membership Levels and what are the benefits?A:  Christine: The base OPN membership levels are: Remarketer: At the Remarketer level, retailers can choose to resell select Oracle products with the backing of authorized, regionally located, value-added distributors (VADs). The Remarketer level has no fees and no partner agreement with Oracle, but does offer online training and sales tools through the OPN portal.Program Details: RemarketerSilver Level: The Silver level is for Oracle partners who are focused on reselling and developing business with products ordered through the Oracle 1-Click Ordering Program. The Silver level provides a cost-effective, yet scalable way for partners to start an OPN Specialized membership and offers a substantial set of benefits that lets partners increase their competitive positioning. Program Details: SilverGold Level: Gold-level partners have the ability to specialize, helping them grow their business and create differentiation in the marketplace. Oracle partners at the Gold level can develop, sell, or implement the full stack of Oracle solutions and can apply to resell Oracle Applications.Program Details: GoldPlatinum Level: The Platinum level is for Oracle partners who want the highest level of benefits and are committed to reaching a minimum of five specializations. Platinum partners are recognized for their expertise in a broad range of products and technology, and receive dedicated support from Oracle.Program Details: PlatinumIn addition we recently introduced a new level:Diamond Level: This level is the most prestigious level of OPN Specialized. It allows companies to differentiate further because of their focused depth and breadth of their expertise. Program Details: DiamondSo as you can see there are various levels cost effective ways that Partners can get assistance, differentiation through OPN membership. Q: What role does the Oracle's World Wide Alliances & Channels (WWA&C), Partner Enablement teams and the WebCenter Community play?  A: Brian: Oracle’s WWA&C teams are responsible for manage relationships, educating their teams, creating go-to-market solutions and fostering communities for Oracle partners worldwide.  The WebCenter Partner Enablement Middleware Team is tasked to create, manage and distribute Specialization resources for the WebCenter Partner community. 
Q: What WebCenter Specializations are currently available?A: Christine:  As of now here are the following WebCenter Specializations and their availability: Oracle WebCenter Portal Specialization (Oracle WebCenter Portal): Available NowThe Oracle WebCenter Specialization provides insight into the following products: WebCenter Services, WebCenter Spaces, and WebLogic Portal.Oracle WebCenter Specialized Partners can efficiently use Oracle WebCenter products to create social applications, enterprise portals, communities, composite applications, and Internet or intranet Web sites on a standards-based, service-oriented architecture (SOA). The suite combines the development of rich internet applications; a multi-channel portal framework; and a suite of horizontal WebCenter applications, which provide content, presence, and social networking capabilities to create a highly interactive user experience. Oracle WebCenter Content Specialization: Available NowThe Oracle WebCenter Content Specialization provides insight into the following products; Universal Content Management, WebCenter Records Management, WebCenter Imaging, WebCenter Distributed Capture, and WebCenter Capture.Oracle WebCenter Content Specialized Partners can efficiently build content-rich business applications, reuse content, and integrate hundreds of content services with other business applications. This allows our customers to decrease costs, automate processes, reduce resource bottlenecks, share content effectively, minimize the number of lost documents, and better manage risk. Oracle WebCenter Sites Specialization: Available Q1 2012Oracle WebCenter Sites is part of the broader Oracle WebCenter platform that provides organizations with a complete customer experience management solution.  Partners that align with the new Oracle WebCenter Sites platform allow their customers organizations to: Leverage customer information from all channels and systems Manage interactions across all channels Unify commerce, merchandising, marketing, and service across all channels Provide personalized, choreographed consumer journeys across all channels Integrate order orchestration, supply chain management and order fulfillment Q: What criteria does the Partner organization need to achieve Specialization? What about individual Sales, PreSales & Implementation Specialist/Technical consultants?A: Brian: Each Oracle WebCenter Specialization has unique Business Criteria that must be met in order to achieve that Specialization.  This includes a unique number of transactions (co-sell, re-sell, and referral), customer references and then unique number of specialists as part of a partner team (Sales, Pre-Sales, Implementation, and Support).   Each WebCenter Specialization provides training resources (GLPs, BootCamps, Assessments and Exams for individuals on a partner’s staff to fulfill those requirements.  That criterion can be found for each Specialization on the Specialize tab for each WebCenter Knowledge Zone.  Here are the sample criteria, recommended courses, exams for the WebCenter Portal Specialization: WebCenter Portal Specialization Criteria Q: Do you have any suggestions on the best way for partners to get started if they would like to know more?A: Christine:   The best way to start is for partners is look at their business and core Oracle team focus and then look to become specialized in one or more areas.  Once you have selected the Specializations that are right for your business, you need to follow the first 3 key steps described below. 
The fourth step outlines the additional process to follow if you meet the criteria to be Advanced Specialized. Note that Step 4 may not be done without first following Steps 1-3. 1. Join the Knowledge Zone(s) where you want to achieve Specialized status Go to the Knowledge Zone Click on the "Why Partner" tab Click on the "Join Knowledge Zone" link 2. Meet the Specialization criteria - Define and implement plans in your organization to achieve the competency and business criteria targets of the Specialization. (Note: Worldwide OPN members at the Gold, Platinum, or Diamond level and their Associates at the Gold, Platinum, or Diamond level may count their collective resources to meet the business and competency criteria required for specialization in this area.) 3. Apply for Specialization – when you have met the business and competency criteria required, inform Oracle by completing the following steps: Click on the "Specialize" tab in the Knowledge Zone Click on the "Apply Now" button Complete the online application form Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Specialized status. Need more information? Access our Step by Step Guide (PDF) 4. Apply for Advanced Specialization (Optional) – If your company has on staff 50 unique Certified Implementation Specialists in your company's approved Specialization's product set, let Oracle know by following these steps: Ensure that you have 50 or more unique individuals that are Certified Implementation Specialists in the specific Specialization awarded to your company If you are pooling resources from another Associate or Worldwide entity, ensure you know that company's name and country Have your Oracle PRM Administrator complete the online Advanced Specialization Application Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Advanced Specialized status. There are additional resources on OPN as well as the broader WebCenter Community.

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with Merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what's top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were:

When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their "lowest prices of the season" as early as October – assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their "lowest prices of the season" guides and will then follow suit.

Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants to see hard data. This has led many retailers to invest in attribution, carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from, and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.

Improving the search / navigation / usability of the site(s): aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going into honing the search and navigation experience. Adding new elements to search and navigation like typeahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)

Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click "buy" and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We're all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e., shipping, processing, and taxes are not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged in order to reduce cart abandonment, while not showing the total figure so early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment, as it serves as a filter for those shopping within a specific price band. Much of the cart abandonment discussion leads us to…

The free shipping / free returns question: it's no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you're not a mega-retailer, shipping is an expensive part of doing business that doesn't allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the "free returns" path, especially in apparel. A number of retailers we spoke with are testing a flat-rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But free shipping remains king.

Social ads and retargeting: they are working, but do they turn off consumers? That's the big question.
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you've been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is whether this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don't doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond. Customers who read this article also found value in the following stories:
Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
What's new in Oracle Commerce v11.1 for Retail
What the Content+Commerce Equation is Missing

    Read the article

< Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >