Search Results

Search found 26810 results on 1073 pages for 'fixed point'.


  • Oracle Delivers Special Recognition for Specialized Partners

    - by michaela.seika(at)oracle.com
    Since announcing Oracle PartnerNetwork Specialized (OPN Specialized) in October 2009, Oracle has been focused on building a program that first enables solution providers to become highly skilled Oracle partners who deliver value to customers, and that then recognizes and rewards their achievements in a meaningful way. Today the company unveiled new benefits reserved for partners who have achieved one or more of the over 50 specializations currently available. The benefits demonstrate Oracle's commitment to showcasing these valued partners to three key audiences: customers, other partners, and Oracle employees.

    With today's launch of www.oracle.com/specialized, Oracle has taken what IDC believes is a first-of-its-kind approach to putting top partners front and center with customers and prospects. While most vendors offer a business partner finder tool on their website, none has gone as far as Oracle with the creation of this new site dedicated to the promotion of Specialized Partners. The tag lines - "Recognized by Oracle, Preferred by Customers" and "Specialized. Recognized. Preferred." - get right to the point: these are the solution providers with which customers should choose to engage. The contents of the page offer multiple proof points to justify the marketing phrases.

    One of the benefits Oracle offers its Specialized Partners is video creation and placement. While Oracle works with partners to create informal or "guerilla" videos, which are often placed on YouTube to generate awareness and buzz, the company also produces professional videos for its partners. The greatest value the partner receives from this benefit isn't the non-trivial production costs that Oracle covers, but the prominent exposure Oracle gives the finished product. Partner videos are featured on www.oracle.com/specialized, used as part of the monthly OPN Specialized Partners webcasts, and placed on a customer-facing website, the Oracle Media Network, which includes several partner sites such as PartnerCast. A solution provider gains a great deal of credibility when it can send a prospect to an Oracle website where it is featured. Read the full article here.

    Read the article

  • My search for what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change rather than disappear. I have had multiple experiences with the cloud so far, mostly good. I have worked on multiple cloud solutions in the past, but let me describe those as 0.x versions. For me the first really serious cloud experience came a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement.

    Since we are a small consultancy firm and don't have much else to do besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages during a year, mostly due to lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS in our own environment? No, not that I'm aware of.

    Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to hire a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives the business a complete server replacement for a fixed price per user, resulting in a clear budget for IT spending - something most small businesses have been looking for, for a long time.

    Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions), but it still has quite some limitations caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer.

    This is also why I see changes happening in my field of work - changes, not unemployment due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them, I would be out of work. But in reality, Cloud 1.0 solutions are here just to set the market needs. Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, a 2.0 solution needs more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; with Office 365 you receive an almost full-blown Exchange 2010 solution. I expect this to be even more customizable in the next version.

    In my search for the changes to my work I try to regularly write a post with my thoughts on the cloud and its impact on my work as a consultant. I'm also planning to present on this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • Hidden files in Nautilus after extracting ISO

    - by Luis Alvarado
    I need to point to the image below first to explain what I find weird here: I extracted the information from an ISO. From Nautilus I could only see two folders, but from the terminal I can see the rest of the files and folders. These folders do not have the . character in front of their names to hide them from plain sight, and they have my user permissions, yet there is no way of seeing them from Nautilus. When I try "Show hidden files" in Nautilus, Nautilus closes itself; it does not show the hidden folders or files. Somehow they are hidden without the usual dot at the beginning of the name. I can interact with them, but the fact that they are visible inside the ISO and appear hidden after extracting them is what confuses me. What permission or setting makes these folders appear hidden and keeps Nautilus from showing them? And, like I said before, trying to show them with the "Show hidden files" option crashes Nautilus and exits it, forcing me to open Nautilus again from the Launcher.

    Read the article

  • Why don't we just fix Javascript?

    - by Jan Meyer
    Javascript sucks because of a few fatal flaws, well pointed out by Douglas Crockford. We talk a lot about them. But the point here is: why don't we fix it? Coffeescript of course does that, and a lot more. But the question here is another one: if we provide a webservice that can convert one version of Javascript to the next, and so on, we can keep the language up to date. Such a conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, the site only needs to take the output of the code transform and paste it in! The effort has immediate benefits for those businesses interested in the results. The rest can sleep tight: their code will continue to run. If we provide backward code transformation as well, then older browsers can also run ANY new code!

    Migration scripts should be created by those who make changes to a language. Today they don't, which is in itself a fundamental omission! It should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it work should be on them. With this system any site will be able to run in any browser, but new code will run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. This is a misconception. We are all trapped in committee thinking, and we drag along things that only worsen our performance over time! We cause an ever-increasing complexity that is hard to overestimate. Javascript is easily fixed. The fact is we don't fix it.

    As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward migration scripts: whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, ALL migrations have worked flawlessly. In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequence of bad leadership. But the competent don't need to play by the same rules. They can (and must) change them. They are the path forward. It is about time to leave the past behind and pursue the leanest, meanest, no, eternal functionality. That would in and of itself revolutionize programming. So, why don't we stop whining and fix programming? Begin with Javascript and change the world. Even if the browser doesn't hook into this system, coders could. So language updaters should take it upon themselves to provide migration scripts. Once they exist, browsers may take advantage of them.
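    A minimal sketch of the idea in Python (the version names and transforms here are hypothetical placeholders, not a real Javascript toolchain): each language revision ships with a source-to-source transform, and converting old code just means composing the transforms in order.

        VERSIONS = ["v1", "v2", "v3"]  # hypothetical language revisions

        # One transform per revision step; real ones would rewrite syntax.
        MIGRATIONS = {
            ("v1", "v2"): lambda src: src.replace("old_keyword", "new_keyword"),
            ("v2", "v3"): lambda src: src,  # placeholder: no syntax change
        }

        def migrate(src, source_version, target_version):
            """Compose per-step migrations from source_version up to target_version."""
            i = VERSIONS.index(source_version)
            j = VERSIONS.index(target_version)
            for step in zip(VERSIONS[i:j], VERSIONS[i + 1:j + 1]):
                src = MIGRATIONS[step](src)
            return src

        # A site avoids the ever-increasing startup delay by running this once
        # and pasting the output in, instead of converting on every page load.
        print(migrate("old_keyword x = 1;", "v1", "v3"))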

    Read the article

  • Gödel, Escher, Bach - Gödel's string

    - by Brad Urani
    In the book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter, the author gives us a representation of the precursor to Gödel's string (Gödel's string's uncle) as: ~Ea,a': (I don't have the book in front of me, but I think that's right). All that remains to get Gödel's string is to plug the Gödel number for this string into the free variable a''. What I don't understand is how to get the Gödel numbers for the functions PROOF-PAIR and ARITHMOQUINE. I understand how to write these functions in a programming language like FlooP (from the book) and could even write them myself in C# or Java, but the scheme that Hofstadter defines for Gödel numbering only applies to TNT (which is just his own syntax for natural number theory), and I don't see any way to write a procedure in TNT since it doesn't have any loops, variable assignments, etc.

    Am I missing the point? Perhaps Gödel's string is not something that can actually be printed, but rather a theoretical string that need not actually be defined? I thought it would be neat to write a computer program that actually prints Gödel's string, or Gödel's string encoded by Gödel numbering (yes, I realize it would have a gazillion digits), but it seems like doing so requires some kind of procedural language and a Gödel numbering system for that procedural language that isn't included in the book. Of course, once you had that, you could write a program that plugs random numbers into the variable "a" and runs the procedure PROOF-PAIR on it to test for theoremhood of Gödel's string. If you let it run for a trillion years you might find a derivation that proves Gödel's string.
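    For orientation, the construction being described has roughly this shape (a from-memory LaTeX rendering that approximates the book's notation rather than quoting it): the uncle says "no a proves the arithmoquinification of a''", and Gödel's string is the uncle arithmoquined with its own Gödel number u.

        U(a'') \;\equiv\; \neg\,\exists a\,\exists a'\;\bigl[\,\mathrm{PROOF\text{-}PAIR}(a,\,a') \;\wedge\; \mathrm{ARITHMOQUINE}(a'',\,a')\,\bigr]

        G \;\equiv\; U(u), \qquad u = \text{the G\"odel number of } U

    Note that PROOF-PAIR and ARITHMOQUINE appear inside the string not as procedures to be Gödel-numbered, but as (astronomically long) TNT formulas expressing the corresponding arithmetic relations - which is why TNT needs no loops or assignments to state them.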

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot: how can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way it is stored as a 1D array in the example is a bit confusing; should I try to store the data as a 2D array instead?

    For test purposes, I'm thinking of just using the slope texture alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above the right foot collision point (in the texture's Y values at that X) and set that as the new height of the sprite.

    I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.Update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically - why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES Mario style pure box platforming!
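    A sketch of that upward scan, in Python for illustration (all names are made up): the MSDN sample stores its mask as a 1D array in row-major order, so a pixel lives at index y * width + x and no 2D array is required.

        def snap_foot_above_slope(alpha, width, height, foot_x, foot_y):
            """Scan up the column at foot_x, starting from the colliding pixel,
            and return the first transparent Y - where the foot should rest.
            `alpha` is the mask's alpha channel flattened row-major, mirroring
            the sample's 1D Color array, so index = y * width + x."""
            assert 0 <= foot_x < width and 0 <= foot_y < height
            for y in range(foot_y, -1, -1):
                if alpha[y * width + foot_x] == 0:  # transparent: out of the slope
                    return y
            return 0  # column is solid to the top; clamp to the texture edge

        # Hypothetical usage after detecting a right-foot collision at (fx, fy):
        # new_y = snap_foot_above_slope(alpha, width, height, fx, fy)
        # sprite.y -= (fy - new_y)  # lift the sprite so the foot sits on the surface

    On the last question in the post: separating horizontal and vertical checks is usually recommended because resolving each axis independently tells you which direction to push the sprite back out; a single combined test only tells you that an overlap happened, not how to undo it.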

    Read the article

  • Trying to migrate Windows 7 install of Adobe CS5 to Ubuntu 12.04 with Wine - 'Internal errors - invalid parameters received'

    - by Don
    I have Adobe CS5 installed and running on the Windows 7 side of my machine. Since I'd hate to boot up into Windows just to use Photoshop, I'm trying to get it in Ubuntu 12.04. Tutorials I found suggested that the easiest way to have it in Ubuntu was to install Wine and copy my Windows installation over. Here are the exact steps I've done up to this point:

      1. From Windows, exported the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Adobe to the desktop.
      2. Changed to Ubuntu, downloaded Wine from the Software Center.
      3. Terminal:
         $ sudo apt-get install wine ttf-mscorefonts-installer
         $ winecfg
         $ wget http://www.kegel.com/wine/winetricks
         $ sh winetricks msxml6 gdiplus gecko vcrun2005sp1 vcrun2008 msxml3 atmlib
      4. Moved the registry export to my home folder.
      5. Copied "Program Files (x86)\Adobe" to "~/.wine/drive_c/Program Files (x86)/Adobe", "Program Files (x86)\Common Files\Adobe" to "~/.wine/drive_c/Program Files (x86)/Common Files/Adobe", "Documents and Settings\Don\Application Data\Adobe" to "~/.wine/drive_c/users/don/Application Data/Adobe", "Windows\System32\odbcint.dll" to "~/.wine/drive_c/windows/system32/odbcint.dll", and lastly "Windows\System32\odbc32.dll" to "~/.wine/drive_c/windows/system32/odbc32.dll".
      6. From Terminal: $ wine regedit adobe.reg
      7. Right-clicked on Photoshop.exe and selected "Open with Wine".

    Got the message "Wine Program Crash, Internal errors - invalid parameters received."

    So to restate my question: how can I get Photoshop running in Ubuntu 12.04? I'm not sold on doing it in this specific way, I just want to use Photoshop without having to reboot. What's the best way to make this happen?

    Edit: I do not have the installation CD, no.

    Read the article

  • Reinstall ruby (or just yaml lib)

    - by Christian Sciberras
    I've installed ruby 1.9 from source, and tried installing the gem 'bundler':

        $ gem install bundler
        /usr/local/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>':
        It seems your ruby installation is missing psych (for YAML output).
        To eliminate this warning, please install libyaml and reinstall your ruby.
        ....

    I've not been able to cleanly uninstall ruby (wtf?!), and installing libyaml at this point didn't help either. So it seems I've ended up with a fk-ed up server, since I can't roll back nor fix the issue. Of course, I do have backups, but this situation is ridiculous nonetheless. Surely there must be a real fix?

    Read the article

  • How to configure remote access to multiple subnets behind a SonicWALL NSA 2400

    - by Kyle Noland
    I have a client that uses a SonicWALL NSA 2400 as their firewall. I need to set up a second LAN subnet for a handful of PCs. Management has decided that there should be a second subnet even though we intend to allow access across the two subnets - I know... I'm having trouble getting communication across the two subnets. I can ping each gateway, but I cannot ping or seem to route traffic from subnet A to subnet B. Here is my current setup:

      X0 Interface: LAN zone with IP address 192.168.1.1
      X1 Interface: WAN zone with WAN IP address
      X2 Interface: LAN zone with IP address 192.168.75.1

    I have configured ARP and routes for the secondary subnet (X2) according to this SonicWALL KB article: http://www.sonicwall.com/downloads/supporting_multiple_firewalled_subnets_on_sonicos_enhanced.pdf using "Example 1". At this point I don't mind if I have to throw the SonicWALL GVC software VPN client into the mix to make it work. It feels like I have an Access Rule issue, but for testing I made the LAN-to-LAN, WAN-to-LAN and VPN-to-LAN rules wide open, with the same results.

    Read the article

  • MATLAB: Best fitness vs mean fitness, initial range

    - by Sa Ta
    Based on the example of Rastrigin's function: in the plot function, if I choose 'best fitness', 'mean fitness' will also be plotted on the same graph. I understand 'best fitness' well; it plots the best function value in each generation versus the iteration number, and it reaches zero after some time. I don't understand the 'mean fitness' in the graph plotted. What do those 'mean fitness' values mean? How does the 'mean fitness' graph help in understanding Rastrigin's function?

    What is the meaning of the terms initial population, initial score and initial range? I wish to have a better understanding of these terms. The default value for initial range is [0,1]. Does it mean that 0 is the lower bound (lb) and 1 is the upper bound (ub)? Do these values interfere with the lb and ub values I set in the constraints?

    I am also trying to better understand lb and ub. If my lb is 0 and ub is 5, does it mean that my final point values will be within 0 and 5? If I know the lb and ub for my problem are between 0 and 5, do I just set the initial range as [0,5] at all times, and may I assume that this is the best option for the initial range, so that I need not try any other values?
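    For reference, and independent of Rastrigin's function: in genetic-algorithm plots, 'mean fitness' is conventionally the average fitness over the whole population at generation t, while 'best fitness' is the population minimum (for a minimization problem):

        \bar{f}_t \;=\; \frac{1}{N} \sum_{i=1}^{N} f\bigl(x_i^{(t)}\bigr), \qquad f_t^{\text{best}} \;=\; \min_{1 \le i \le N} f\bigl(x_i^{(t)}\bigr)

    The gap between the two curves is a rough indicator of how much diversity remains in the population; as the GA converges, the mean tends toward the best.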

    Read the article

  • Improve Customer Experience with Real-Time Scheduling

    - by ruth.donohue
    Recently, my husband rearranged his busy work schedule so that he could stay home an entire afternoon to wait for the alarm company to reset the password to our alarm system, only to discover at the end of the afternoon that the field service rep wasn't going to be able to make the appointment after all. And the company asked him to reschedule and block off time for another afternoon. Needless to say, my husband wasn't happy with that experience.

    Unfortunately, customer experiences like this happen every day. As a business, you can't afford these types of encounters. It's too easy for your customers to turn to one of your competitors once they've reached the point of frustration. Customer experience and customer loyalty are more important than ever. So how can you prevent something like this from occurring? With the newly available Siebel Field Service Integration with Oracle Real-Time Scheduler, your service organization can:

      - Create cost-optimized plans and schedules to improve operating efficiencies
      - Deliver more accurate ETAs and shorten appointment windows
      - Minimize the impact of in-day events such as delays on site, sickness, poor weather conditions, and vehicle breakdowns

    Rather than requiring them to wait for an entire afternoon, imagine asking customers to be available for only an hour - and being able to commit to that time by working around unforeseen events and understanding the impact of delays or re-routings before they become customer issues. What would your customer experience and customer satisfaction be like then?

    Learn more about the Siebel Field Service Integration with Oracle Real-Time Scheduler:

      - Register for and attend the upcoming webcast on Thursday, March 10th at 8:30 AM Pacific Time
      - Read the press release, data sheet, and solution brief
      - Visit the Siebel Field Service webpage

    Read the article

  • How to enable extended logging for classic ASP on IIS7 on Windows 2008 R2

    - by Neil Trodden
    I had to deploy an application that was not written by me onto the above configuration. It is a rather bizarre hybrid of ASP.NET and classic ASP, and it's the classic ASP that is proving troublesome. The client is having problems with 500 Internal Server Errors appearing; I can see some of these in the logs, but I only get the error code and the page name, and little else. What I would like to see is the actual error message, to at least give me an idea of what is going on (or not going on, depending on your point of view). I don't want to display errors in the browser, as I don't know the code well enough and this could (for all I know) display some crazy code where the db password is hard-coded into the site.

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:

      - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
      - Mono trails the .NET releases. What .NET level is currently fully supported?
      - Do all GUI elements (WinForms?) work correctly in Mono?
      - Businesses may not want to depend on open source frameworks as the official plan B.

    I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?

    Edit: Mark H summarized it as: "If the claim is that 'I have a windows application written in .NET, it should run on mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • How can I install an Apple Magic Trackpad on a PC without Boot Camp?

    - by rymo
    I have an Apple Magic Trackpad and I'd like to use it with my PC. I have no other Apple hardware besides the Trackpad; I do not have OS X and thus no Boot Camp CD. The Trackpad uses Bluetooth and will pair with Windows 7 without specific drivers (it appears as an HID-compliant mouse), but all it will do is point and left-click (physical click, no touch tap). With Apple's Windows driver update, I should be able to get:

      - Tap to click
      - Dragging
      - Drag lock
      - Secondary click
      - Two-finger scrolling
      - Two-finger secondary tap/click

    But how can I obtain this driver without Boot Camp installed? Apple's Boot Camp update EXE will not install on my PC (non-Apple hardware).

    Read the article

  • Blogging from Office RT

    - by Dennis Vroegop
    During the last Build conference all attendees were given a brand new, sparkling, exciting Surface RT device (I love that machine despite its name, but that's beside the point). On it came a version of Office 2013 RT, or better: the preview version. Now, I translated the term "Preview" to "Beta", which is OK, since I've been using a lot of beta products from Microsoft and they all were great.

    And then I wanted to publish a blog post from Word. I knew I could; I have been doing this for a long time (I prefer Live Writer, but that isn't available on Windows RT). So I wrote the entry and hit "Publish". Instead of my blog site I got a nice non-descriptive error telling me I couldn't post. So I fired up my other (Intel-based) Win8 tablet and opened Word RT Preview; it loaded my blog post (you've got to love the automatic synchronization through SkyDrive), and I tried from that machine. Same error. So I installed Live Writer (remember, the other machine is Intel-based) and posted from there. That worked like a charm. Apparently, there was something wrong with Word. I gave up and didn't think about it anymore.

    Yet... what you're reading now is written in Word 2013 RT on my Surface RT. So what did I do? Simple: I updated from the Preview version to the final version. That's all there was to it. So... if you're still on the preview, I urge you to upgrade. You need to go to the "classic desktop update" window instead of going through the Windows Store app-style update, since Office is a desktop system, but once you do that you'll have the full version as well. Happy blogging!

    Read the article

  • Mountain Lion fails to connect to Windows share after the connection is interrupted

    - by T Reddy
    I have a Windows 7 share that my Mountain Lion MacBook Pro connects to. The Windows share is simply a user account. For whatever reason, when my connection gets interrupted, the Mac will show a dialog saying as much and will ask me to ignore or disconnect. From this point forward, I cannot re-establish the connection from the Mac to the Windows share (even if I reboot the Mac). I always have to reboot the Windows machine in order for my Mac to see the share again. My Windows share is my media center, so I'm not always able to reboot the machine because it is recording TV. Has anybody else encountered this problem, and if so, how is it resolved?

    Read the article

  • Create 301 Redirection in Amazon Route 53 for Wildcard Subdomains

    - by Eric Yin
    My domain name is hosted on Route 53 DNS. Amazon has a guide for doing a 301 redirect from www. to the naked domain by pointing the www. version at an S3 static website with the 301 set up. My question is: how can I have *.domain.com all 301-redirect to the naked domain name? I guess the options are:

      - Some way to get all wildcard subdomains to end up in one S3 bucket - how?
      - Use CloudFront on the www. version S3 site and put the wildcard subdomains on CloudFront - but how?
      - Some hidden setting that lies in Route 53 - then where?
      - Use EC2 - better not to suggest this one, it's too costly for this task.

    Please advise.

    Read the article

  • Pointing Domain to VDS Directory

    - by Jonathan Sampson
    I've got a domain name that is managed through 000Domains.com. I also have a virtual dedicated server hosted with GoDaddy.com. Within my VDS, I created a folder /mysite and placed all of my website files there. I can test this through the IP address of my VDS, but I would now like to point my domain from 000Domains over to my sub-directory hosted on GoDaddy. How do I do this? Do I need to make any specific modifications to my VDS to inform it that one of the directories will be accessible from a domain name? I have access to Simple Control Panel, if that is of any relevance.

    Read the article

  • Requesting quality analysis test cases ahead of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem:

      1. Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".
      2. Knowing the widespread use and effects of this requirement, I asked whether, had it come to a point where the requirement could not be finished prior to release, it would be a viable option to trash the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no".
      3. Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach.

    Knowing that I had to insist on my request and on the responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly?

    P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state, due to the complexity of the problem and the lack of time. This only happened after a two-hour-long meeting with other seniors held to convince the aforementioned folks.

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense.

    If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company, where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right.

    In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meetup in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone.

    There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability.

    But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • Migrating Forms to Java or ADF, the truth and no FUD

    - by Grant Ronald
    The question about migrating Forms to Java (or ADF or APEX) comes up time and time again. I wanted to pull some core information together in a single blog post to address it.

    The first question I always ask is "WHY" - Forms may still be a viable option for you, so "if it ain't broke, don't fix it". The bottom line is, whatever anyone tells you, it's going to be a considerable effort and cost to migrate from Forms to something else, so the business is going to want to know WHY you'd spend all those hard-earned dollars switching from something that might have been serving you quite adequately.

    Second point: if you are going to switch, I would encourage you NOT to look at building a Forms clone. So many times I see people trying to build an ADF application and EXACTLY mimic the Forms model - ADF is NOT a Forms clone. You should be building to the sweet spot of your target technology, not your 20-year-old client/server technology. This is also the chance for the business to embrace change, so maybe look at new processes, channels and technology options that weren't available when you first developed your Forms applications.

    To help you understand what is involved, I've put together a number of resources:

      - Thinking about migration of Forms to Java, ADF or APEX? Read this to prepare yourself.
      - Oracle Forms to ADF: When, Why and How - this gives you an overview of our vision, directly from Oracle Product Management.
      - Redeveloping a Forms Application with Oracle JDeveloper and Oracle ADF - a conference session from myself and Lynn Munsinger on how ADF can be used in a Forms migration/rewrite.

    As someone who manages both the Forms and ADF Product Management teams, I have a foot in either camp and am happy to see you use either tool. However, I want you to be able to make an informed decision. My hope is that these information sources will help you do that.

    Read the article

  • Windows Firewall: How to allow traffic on port 8080?

    - by Chadworthington
    I am trying to configure Team Foundation Server so that 1) it is accessible from within my home network, and 2) the web site is then accessible via the Internet. I have a problem with point 1: when I access http://192.168.1.106:8080/tfs/web/ locally from 192.168.1.106, it works. When I access the same web site from another PC in my home network, the above URL works only if I turn off the firewall on 192.168.1.106.

    Can someone please tell me specifically how to allow traffic on port 8080 without turning off Windows Firewall? It seems that the exceptions I can specify are intended for listing programs on the box that need to communicate out. Is IIS the program for which I need to make the exception? How do I specify that port 8080 traffic should be allowed for web site traffic on this port? I hope to have success with point 2 later, but I figure (1) should be done first. I expect issues.

    Read the article

  • Multiple Memcached server /etc/init.d startup script that works?

    - by p4guru
    I installed the memcached server from source and can get a standard startup script installed for one memcached server instance, but trying several scripts found via Google, I can't find one that works to manage auto-startup on boot for multiple memcached server instances. I've tried both of these scripts, and neither works; `service memcached start` just returns to the command prompt with no memcached server instances started:

      - lullabot.com/articles/installing-memcached-redhat-or-centos
      - addmoremem.blogspot.com/2010/09/running-multiple-instances-of-memcached.html

    However, this bash script works, though it doesn't start the memcached instances at boot:

        #!/bin/sh
        case "$1" in
            start)
                /usr/local/bin/memcached -d -m 16 -p 11211 -u nobody
                /usr/local/bin/memcached -d -m 16 -p 11212 -u nobody
                ;;
            stop)
                killall memcached
                ;;
        esac

    OS: CentOS 5.5 64-bit
    Memcached = v1.4.5
    Memcache = v2.2.5

    Can anyone point me to a working /etc/init.d/ startup script to manage multiple memcached servers? Thanks

    Read the article

  • How to disable apache mods without any problems

    - by Saif Bechan
    I have an Apache installation where every single mod is enabled. I want my server to be as light as possible, so I want to disable everything I do not need. What is the best way to go about this? I know it's just a matter of commenting out the line in the conf file. But what if some hidden service somewhere needs that mod at some random point in time? Can I get some suggestions on what to do?

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this:

      - Scene (e.g. MainMenu): contains multiple entities and delegates update()/draw() to each
      - Entity: base class for all things in a scene (e.g. MenuItem or Alien)
      - Sprite: base class for all entities that just draw a texture, i.e. don't have their own drawing logic

    Does it make sense to split up entities and sprites like that? I think in a 2D game, the terms entity and sprite are somewhat synonymous, right? But I do believe that I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication. Most entities are like that. One weird case is my Text class: it derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.
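    For concreteness, here is a minimal sketch of one common arrangement, in Python for illustration (the renderer, texture and font types are stand-ins for whatever the engine provides; all names are made up). The Text case can be handled by rendering the string to a texture first and then delegating to a single Sprite constructor:

        class Entity:
            """Base class: anything a Scene can update and draw."""
            def update(self, dt): pass
            def draw(self, renderer): pass

        class Scene:
            """Owns entities and delegates update()/draw() to each."""
            def __init__(self):
                self.entities = []
            def update(self, dt):
                for e in self.entities:
                    e.update(dt)
            def draw(self, renderer):
                for e in self.entities:
                    e.draw(renderer)

        class Sprite(Entity):
            """An entity whose drawing is just blitting a texture."""
            def __init__(self, texture, x=0, y=0):
                self.texture, self.x, self.y = texture, x, y
            def draw(self, renderer):
                renderer.draw_texture(self.texture, self.x, self.y)  # assumed API

        class Text(Sprite):
            """Renders its string to a texture once, then acts like any Sprite."""
            def __init__(self, font, string, x=0, y=0):
                super().__init__(font.render(string), x, y)  # assumed font API

    One design note: keeping texture loading out of Sprite entirely (always pass an already loaded texture, and let a separate resource loader turn paths into textures) removes the two-constructor awkwardness that Text runs into.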

    Read the article
