Search Results

Search found 913 results on 37 pages for 'proprietary'.


  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols, .map format). I have the code for the effects written and working, no problems. My issue now is linking the two. I've tried two methods so far.

    The first is to use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the users I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question whether patching the program with Detours is valid under the terms of the EULA. So, that option is out.

    The other option is the traditional DLL replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name; that's easy). I then need my DLL to load the Cg framework and runtime (which rely on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops. I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (I'm not sure it's possible to have my DLL's import table point to a DLL in a subdirectory), or I need to dynamically link them (which I'd rather not do, just to simplify the build process): something to force them to refer to the system's file, not my custom replacement.

    The entire chain is: the program loads DLL A (named opengl32.dll). DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll. I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A. How would this be done?

    Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible.

    Edit 2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, that looks like what I need. Is that right, or am I misjudging? (Off to test it now.)

    Edit 3: I've solved this problem by doing things a bit differently. Instead of dropping in an opengl32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of exporting one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about functions calling back in (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...
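
    For illustration, here is a minimal sketch (not the asker's actual code) of the wrapper-side half of this problem: the wrapper binds to the real opengl32.dll by absolute system path, so its own exports can never resolve back to itself. All names are hypothetical, and a real wrapper would export undecorated names through a .def file:

        // Hypothetical wrapper fragment; glClear stands in for the ~120 real exports.
        #include <windows.h>

        typedef void (WINAPI *PFNGLCLEAR)(unsigned int mask);

        static HMODULE    g_realGL      = NULL;
        static PFNGLCLEAR real_glClear  = NULL;

        // Load the system's opengl32.dll by absolute path so the loader
        // cannot resolve the name back to this wrapper DLL.
        static void LoadRealOpenGL(void)
        {
            if (g_realGL) return;
            char path[MAX_PATH];
            GetSystemDirectoryA(path, MAX_PATH);
            lstrcatA(path, "\\opengl32.dll");
            g_realGL = LoadLibraryA(path);
            if (g_realGL)
                real_glClear = (PFNGLCLEAR)GetProcAddress(g_realGL, "glClear");
        }

        // Exported wrapper: run the effect hooks, then forward to the real
        // implementation instead of recursing into ourselves.
        extern "C" __declspec(dllexport) void WINAPI glClear(unsigned int mask)
        {
            LoadRealOpenGL();
            // ... post-processing hooks would go here ...
            if (real_glClear) real_glClear(mask);
        }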

    Read the article

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users :). Basically we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The question is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it?

    Example: a device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is, the driver translates that to the device language (protocol) and sends the message, the device responds, and the driver again translates it for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; also there are several versions of the device which use different protocols, etc., but would use the same time-sync service. The goal is to abstract the devices through the drivers so the same service can communicate with all of them.

    Another example: we find out that remote communications with the device are down. So we send somebody out with a PDA; he connects to the device using a USB cable and starts up an application which has the same functionality as the time-sync service. Again, a middleware instance is created (on the PDA) to translate communication between the application and the device, this time using USB/serial as the medium rather than TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
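
    Purely as an illustration of the kind of seam being described (all names here are hypothetical, and a real layer would need timeouts and error reporting), the driver can be written against an abstract medium so the same protocol translation runs over TCP today and over USB/serial from the PDA tomorrow:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Abstract byte-stream medium: TCP, serial/USB, IR and RF back-ends
        // would each implement these two primitives.
        class Medium {
        public:
            virtual ~Medium() {}
            virtual std::size_t read(std::uint8_t* buf, std::size_t len) = 0;
            virtual std::size_t write(const std::uint8_t* buf, std::size_t len) = 0;
        };

        // The driver only ever sees a Medium, never a socket or a port, so a
        // service like time-sync works unchanged regardless of the transport.
        class DeviceDriver {
        public:
            explicit DeviceDriver(Medium& m) : medium_(m) {}

            // Encode a request in the device's protocol, send it, and
            // return the raw reply for the calling service to interpret.
            std::vector<std::uint8_t> request(const std::vector<std::uint8_t>& payload) {
                medium_.write(payload.data(), payload.size());
                std::uint8_t buf[256];
                std::size_t n = medium_.read(buf, sizeof buf);
                return std::vector<std::uint8_t>(buf, buf + n);
            }

        private:
            Medium& medium_;
        };

    A boost.asio socket, a serial port handle, or an IR link then lives entirely inside one Medium subclass, and the dispatcher only hands out Medium references.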

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS-DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable-length record format much like CSV that I've successfully implemented using the C standard library file I/O functions.

    When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well; however, I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record.

    Most of the time records will grow in size, because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records is, and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area.

    The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file, writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users will end up with a file exceeding 2MB in size.

    I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming, and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
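
    For what it's worth, Ralph Brown's list does document one relevant behavior here: INT 21h function 40h ("write to file via handle") with CX = 0 truncates or extends the file at the current file pointer. A hedged sketch, assuming the Digital Mars runtime ships the usual DOS-compiler <dos.h>/<io.h> helpers (some DOS compilers wrap the same call as chsize()):

        /* Truncate an open DOS file handle to new_size bytes by seeking
           there and issuing a zero-byte write (INT 21h, AH=40h, CX=0). */
        #include <dos.h>
        #include <io.h>
        #include <stdio.h>

        int dos_truncate(int handle, long new_size)
        {
            union REGS r;

            if (lseek(handle, new_size, SEEK_SET) == -1L)
                return -1;              /* could not seek to the new EOF */

            r.h.ah = 0x40;              /* write to file via handle      */
            r.x.bx = handle;
            r.x.cx = 0;                 /* zero bytes: truncate here     */
            intdos(&r, &r);
            return r.x.cflag ? -1 : 0;  /* carry flag set means error    */
        }

    After a shrinking save, seeking to the end of the last record and making a call like this would drop the leftover tail without needing a second file.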

    Read the article

  • HTML5 Video Tag in Chrome - Why is currentTime ignored when video downloaded from my webserver?

    - by larson4
    I want to be able to play back video from a specific time using the HTML5 video tag (and currently only need to worry about Chrome). This is possible by setting the currentTime property of a video element. That works fine in Chrome with a video sourced from html5rocks.com, but is ignored when the same file is loaded from my own local webserver. Using the sample code from http://playground.html5rocks.com/#video_tag, I have arrived at the following HTML:

        <!DOCTYPE html>
        <html>
        <body>
        <video id="video1" width="320" height="240" volume=".7" controls preload=true autobuffer>
          <source src="http://playground.html5rocks.com/samples/html5_misc/chrome_japan.webm" type='video/webm; codecs="vp8, vorbis"'/>
        </video>
        <input type=button onclick="play()" value="play">
        <input type=button onclick="setTime()" value="setTime">
        <input type=button onclick="pause()" value="pause">
        <script>
        var play = function() { document.getElementById("video1").play(); }
        var setTime = function() { document.getElementById("video1").currentTime = 2; }
        var pause = function() { document.getElementById("video1").pause(); }
        </script>
        </body>
        </html>

    Using Chrome, if you:

    1. Wait for the video to download for a bit (until the progress bar is about 10% in)
    2. Press setTime - you should notice the time jump to 2 seconds
    3. Press play

    ...the video will start at 2 seconds in. But if I download the webm file in the source src to my local webserver, and change src to point locally:

        <source src="chrome_japan.webm" type='video/webm; codecs="vp8, vorbis"'/>

    ...then setting currentTime is completely ignored. No errors show up in the developer tools console. Now, my webserver serves this with MIME type "video/webm", but it is served like any old file - not streamed. Do I need a webserver that streams in some specific way? What else could be at fault here? Note: the webserver is a proprietary platform and won't have any of the same exact settings or controls one might expect of Tomcat or some other commonly used webserver, though there may be other ways to achieve the same effect.

    Read the article

  • DVCS + hosting for a startup commercial multiplatform phone app

    - by AG
    I'm in lean startup mode, working on a simple phone app that will be published initially as an iThingy app and an Android app, with possibly Blackberry and Symbian versions to follow. I'm about to go from no repository to needing a central repository that up to 4 very part-time resources will be sharing. Two of us have no version control background, one has used Subversion, and I've used most of the major centralized VCS systems. I'm not going to be pushing the technical limitations of any VCS for a long time; I'm sure that any of the major systems would work fine. And the hosting accounts I've looked at seem reasonable. So I'm really focused on minimizing the downside risks. That is, I'd like to find a stable setup that is easy to learn in general, easy to use from Windows/Eclipse, and won't paint me into any obvious corners for the next 12 months or so.

    A quick search of the web has led me to consider the following pairs of DVCS and hosting service, with what I think I'm hearing as their strengths and weaknesses (for my purposes):

    Bazaar/Launchpad - My initial choice, since I need to get more familiar with this pair for the Google Summer of Code mentoring I'm doing. But, whatever the technical merits, a non-starter for me because they are purely open source, with no private repository plans to purchase that I can see.

    Git/GitHub - Git: fast, light, ultimately flexible, but relatively less Windows-friendly; an Eclipse plugin (eGit) is available but relatively young. GitHub: widely used, pricing is fine.

    Mercurial/BitBucket - Mercurial: a little less flexible, a little more Windows-friendly, and the Eclipse plugin seems a bit more mature. BitBucket: widely used, pricing is fine, and it includes a wiki and an issue tracker that we might be able to use instead of something like Basecamp, at least for a while.

    Mercurial/BitBucket seem like the winning pair so far for my particular situation; at least two of us are definitely going to be working mostly from Eclipse on Windows, and reducing my own learning curve is a priority. ;-) But I have two specific questions:

    1) Am I wrong about Bazaar/Launchpad, and is there a viable, secure way to use them for proprietary code?

    2) Any reason to think that the Mercurial/BitBucket pair will end up being a headache for my Mac developer soon, or for Blackberry or Symbian developers a little later?

    ag

    Read the article

  • Primary language - C++/Qt, C#, Java?

    - by Airjoe
    I'm looking for some input, but let me start with a bit of background (for the tl;dr, skip to the end). I'm an IT major with a concentration in networking. While I'm not a CS major, nor do I want to program as a vocation, I do consider myself a programmer and do pretty well with the concepts involved. I've been programming since about 6th grade; I started out with a proprietary game creation language that made my transition into C++ at college pretty easy. I like to make programs for myself and friends, and have been paid to program for local businesses.

    A bit about that: I wrote some programs for a couple of local businesses in my senior year in high school. I wrote management systems for local shops (inventory, phone/POS orders, timeclock, customer info, and more stuff I can't remember). It definitely turned out to be over my head, as I had never had any formal programming education. It was a great learning experience, but damn was it crappy code. Oh yeah, by the way, it was all VB6.

    So, I've used VB6 pretty extensively; I've used C++ in my classes (intro to programming up through algorithms); I used Java a little bit in another class (had to write a ping client program, pretty easy) and for some simple Project Euler problems to help learn the syntax while writing the program for the class. I've also used C# a bit for my own simple personal projects (one program which would just generate an HTTP request against a list of websites and notify me if one responded unexpectedly or not at all, and another which just held a list of things to do and periodically reminded me to do them), things I would've written in VB6 a year or two ago. I've just started using Qt C++ for some undergrad research I'm working on.

    Now that I've had some formal education, I [think I] understand organization in programming a lot better (I didn't even use classes in my VB6 programs where I really should have): how it's important to structure code, split into functions where appropriate, document properly, consider efficiency in both memory and speed, dynamic and modular programming, etc.

    I was looking for some input on which language to pick up as my "primary". As I'm not a "real programmer", it will be mostly hobby projects, but it will include some 'real' projects, I'm sure. From my perspective: Qt C++ and Java are cross-platform, which is cool. Java and C# run in a virtual machine, but I'm not sure if that's a big deal (something extra to distribute, possibly a bit slower? I think Qt would require additional distributables too, right?). I don't really know much more than this, so I appreciate any help, thanks!

    TL;DR: I am an avocational programmer looking for a language; I want quick and straightforward development; I liked VB6; and I will be working with database-driven GUI apps. Should I go with Qt C++, Java, C#, or perhaps something else?

    Read the article

  • How to design a service that can provide an interface as a JAX-WS web service, via JMS, or as local method calls

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or be called directly from another service in the same container?

    Here's some more context. Currently I am responsible for a service (let's call it Service X) with the following properties:

    - The interface definition is a human-readable document kept up to date manually.
    - It accepts HTTP form-encoded requests to a single URL.
    - It sends plain old XML responses (no schema).
    - It uses Apache to accept requests, plus a proprietary application server (not servlet- or EJB-based) containing all logic, which runs in a separate tier.
    - It makes heavy use of a relational database.
    - It is called both by internal applications written in a variety of languages and by a small number of third parties.

    I want to (or at least, have been told to!):

    - Switch to a well-known (preferably open source) JEE stack such as JBoss, Glassfish, etc.
    - Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A.
    - Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice-to-have.

    Additional considerations are:

    - On a smaller deployment, Service A and Service B will likely be running on the same machine, and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other.
    - Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services.
    - I'm happy with either contract-first (WSDL, I guess) or (annotated) code-first development.

    Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.

    Read the article

  • Planning to create PDF files in Ruby on Rails

    - by deau
    Hi there, a Ruby on Rails app will have access to a number of images and fonts. The images are components of a visual layout, which will be stored separately as a set of rules. The rules specify document dimensions along with which images are used and where. The app needs to take these rules, fetch the images, and generate a PDF that is ready for local printing or emailing.

    The fonts will also be important. The user needs to customize the layout by inputting text which will be included in the PDF. The PDF must therefore also contain the desired font, so that the document renders identically across different machines. Each PDF may have many pages. Each page may have different dimensions, but this is not essential. Either way, the ability to manipulate the dimensions and margins given by the PDF is essential. The only thing that needs to be regularly changed is the text.

    If this takes too much development, then the app can store the layouts in 3rd-party PDFs and edit the textual content directly. Eventually, though, this will prove too restrictive on the app's intended functionality, so I would prefer the app to generate the PDFs itself.

    I have never worked with PDFs before and, for the most part, I've never had to output anything to the user outside their monitor. A printed medium could require a very different approach to get the best results. If anyone has any advice on how to model the PDF format, it would be really appreciated. The technical aspects of printing such as bleed, resolution and colour have already been factored into the layouts and images.

    I am aware that PDF is a proprietary file format and I want to use free or open source software. I have seen a number of Ruby libraries for generating PDF files, but because I am new on this scene I have no way to reliably compare them and too little time to implement and test them all. I also have the option of using C to handle this feature, and if this is process-intensive then that might be preferred. What should I be thinking about, and how should I be planning to implement this?

    Read the article

  • Preg_replace regex, newlines, connection resets

    - by bob_the_destroyer
    I have mixed HTML, custom code, and regular text I need to examine and change frequently on several long wiki pages. I'm working with a proprietary wiki-like application and have no control over how the application functions or validates user input. The layout of pages that users add must follow a very specific standard layout and always include very specific text in only certain places - a standard which frequently changes. If users add pages that are too far out of the standard, they will be deleted. The fact that all this is obviously a complete waste of time when alternative platforms that do exactly what's needed here exist is already understood.

    I've built a PHP-based API to automate this post-validation and frequent restandardization process for me. I've been able to set up regex patterns to handle all this mixed text, and they all work fine for handling single lines. The problem I have is this: a poorly formed regex run against long text with line breaks can lead to unexpected results, such as connection resets. I have no access to server-side logs to troubleshoot. How do I overcome this?

    This is just one example of what I currently have. The {column} and {section} tags I'm searching for below can have any number of attributes and wrap any text. {section} may or may not exist and may span one or more lines under {column}, but it has to be wrapped inside {column}. {column} itself may or may not exist, and if it doesn't, I don't care. I want to grab the inner section contents and wrap them in an HTML div tag. I can't recall the exact pattern I'm using offhand at the moment, but it's close enough...

        $pattern = "/\{column:id=summary([|]?([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)(\{section([|]([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)\{section\}(.*))?\{column\}/s";
        $replacement = "{html}<div id='summary'>$7</div>{html}";
        $text = preg_replace($pattern, $replacement, $subject);

    Handling the {column} and {section} attributes and passing only valid HTML parameters to the new div (or a subset of them) is itself a challenge, but my main focus right now is getting that (.*) value within {section} above without causing a connection reset. Any pointers?

    Read the article

  • [GEEK SCHOOL] Network Security 4: Windows Firewall: Your System’s Best Defense

    - by Ciprian Rusen
    If you have your computer connected to a network, or directly to your Internet connection, then having a firewall is an absolute necessity. In this lesson we will discuss the Windows Firewall - one of the best security features available in Windows!

    The Windows Firewall made its debut in Windows XP. Prior to that, Windows systems needed to rely on third-party solutions or dedicated hardware to protect them from network-based attacks. Over the years, Microsoft has done a great job with it and it is one of the best firewalls you will ever find for Windows operating systems. Seriously, it is so good that some commercial vendors have decided to piggyback on it!

    Let's talk about what you will learn in this lesson. First, you will learn what the Windows Firewall is, what it does, and how it works. Afterward, you will start to get your hands dirty and edit the list of apps, programs, and features that are allowed to communicate through the Windows Firewall depending on the type of network you are connected to. Moving on from there, you will learn how to add new apps or programs to the list of allowed items and how to remove the apps and programs that you want to block. Last but not least, you will learn how to enable or disable the Windows Firewall, for only one type of network or for all network connections. By the end of this lesson, you should know enough about the Windows Firewall to use and manage it effectively.

    What is the Windows Firewall?

    Windows Firewall is an important security application that's built into Windows. One of its roles is to block unauthorized access to your computer. The second role is to permit authorized data communications to and from your computer. Windows Firewall does these things with the help of rules and exceptions that are applied to both inbound and outbound traffic. They are applied depending on the type of network you are connected to and the location you have set for it in Windows when connecting to the network. Based on your choice, the Windows Firewall automatically adjusts the rules and exceptions applied to that network.

    This makes the Windows Firewall a product that's silent and easy to use. It bothers you only when it doesn't have any rules and exceptions for what you are trying to do or what the programs running on your computer are trying to do. If you need a refresher on the concept of network locations, we recommend you read our How-To Geek School class on Windows Networking.

    Another benefit of the Windows Firewall is that it is so tightly and nicely integrated into Windows and all its networking features that some commercial vendors decided to piggyback onto it and use it in their security products. For example, products from companies like Trend Micro or F-Secure no longer provide their own proprietary firewall modules but use the Windows Firewall instead.

    Except for a few wording differences, the Windows Firewall works the same in Windows 7 and Windows 8.x. The only notable difference is that in Windows 8.x you will see the word "app" being used instead of "program".

    Where to Find the Windows Firewall

    By default, the Windows Firewall is turned on and you don't need to do anything special in order for it to work. You will see it display some prompts once in a while, but they show up so rarely that you might forget it is even working. If you want to access it and configure the way it works, go to the Control Panel, then go to "System and Security" and select "Windows Firewall".

    Now you will see the Windows Firewall window, where you can get a quick glimpse of whether it is turned on and the type of network you are connected to: private networks or public network. For the network type that you are connected to, you will see additional information like:

    - The state of the Windows Firewall
    - How the Windows Firewall deals with incoming connections
    - The active network
    - When the Windows Firewall will notify you

    You can easily expand the other section and view the default settings that apply when connecting to networks of that type.

    If you have installed a third-party security application that also includes a firewall module, chances are that the Windows Firewall has been disabled, in order to avoid performance issues and conflicts between the two security products. If that is the case for your computer or device, you won't be able to view any information in the Windows Firewall window and you won't be able to configure the way it works. Instead, you will see a warning that says: "These settings are being managed by vendor application - Application Name". In the screenshot below you can see an example of how this looks.

    How to Allow Desktop Applications Through the Windows Firewall

    Windows Firewall has a very comprehensive set of rules, and most Windows programs that you install add their own exceptions to the Windows Firewall so that they receive network and Internet access. This means that you will see prompts from the Windows Firewall on occasion, generally when you install programs that do not add their own exceptions to the Windows Firewall's list.

    In a Windows Firewall prompt, you are asked to select the network locations to which you allow access for that program: private networks or public networks. By default, Windows Firewall selects the checkbox that's appropriate for the network you are currently using. You can decide to allow access for both types of network locations or just one of them. To apply your setting, press "Allow access". If you want to block network access for that program, press "Cancel" and the program will be set as blocked for both network locations.

    At this step you should note that only administrators can set exceptions in the Windows Firewall. If you are using a standard account without administrator permissions, the programs that do not comply with the Windows Firewall rules and exceptions are automatically blocked, without any prompts being shown.

    You should note that in Windows 8.x you will never see any Windows Firewall prompts related to apps from the Windows Store. They are automatically given access to the network and the Internet based on the assumption that you are aware of the permissions they require, based on the information displayed by the Windows Store. Windows Firewall rules and exceptions are automatically created for each app that you install from the Windows Store. However, you can easily block access to the network and the Internet for any app, using the instructions in the next section.

    How to Customize the Rules for Allowed Apps

    Windows Firewall allows any user with an administrator account to change the list of rules and exceptions applied for apps and desktop programs. In order to do this, first start the Windows Firewall. In the column on the left, click or tap "Allow an app or feature through Windows Firewall" (in Windows 8.x) or "Allow a program or feature through Windows Firewall" (in Windows 7). Now you see the list of apps and programs that are allowed to communicate through the Windows Firewall.

    At this point, the list is grayed out and you can only view which apps, features, and programs have rules that are enabled in the Windows Firewall.
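
    As an aside beyond this lesson's GUI walkthrough, the same kind of allow rule can also be created from an elevated command prompt with the built-in netsh tool on Windows 7 and 8.x; the rule name and program path below are purely illustrative:

        netsh advfirewall firewall add rule name="My App" dir=in action=allow program="C:\Apps\MyApp.exe" profile=private enable=yes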

    Read the article

  • WWDC and Tech Ed: A Tale of Two DevCons

    - by andrewbrust
    Next week marks the first full week of June. Summer will feel in full swing and it will be a pretty big season for technology. In seeming acknowledgement of that very fact, both Apple and Microsoft will be holding large developer conferences starting Monday. Apple will hold its annual Worldwide Developers Conference (WWDC) in lovely San Francisco, and Microsoft will hold its Tech Ed conference in muggy, oil-laden yet soulful New Orleans. A brief survey of each show reveals much about the differences in each company's offerings, strategy, and approach to customers and partners.

    In the interest of full disclosure, I must explain that I will be speaking at Microsoft's Tech Ed show, and have done so, on and off, since 2003. I have never been to an Apple conference and, as readers of this blog may know, I acquired my first ever Apple product 2 months ago when I bought an iPad on the day of that product's launch. I think I have keen insights into Microsoft's conference. My ability to comment on Apple's event ranges somewhere between backseat driver and naive observer. Just so you know.

    Although both shows cater to their respective company's developers, there are a number of differences in the events' purposes and content approaches. First off, let's consider each show as a news and PR vehicle. WWDC will feature Steve Jobs' keynote address and most likely will be where Apple officially reveals details of its 4th-generation iPhone. Jobs will likely also provide deep background information on the corresponding iPhone OS release. These presumed announcements will make the show a magnet for the tech press and tech blogger elite. Apple's customers will be interested too, especially since the iPhone OS release will likely be made available to owners of existing iPhone, iPod Touch and iPad devices.

    Tech Ed, on the other hand, may not be especially newsworthy at all. The keynote address will be given by Bob Muglia, who is President of the company's Server and Tools Division, and he'll likely be reviewing things more than previewing them. That's because the company has, in the last 6-8 months, already released new versions of a majority of its products, including Windows, Office, SharePoint, SQL Server, Exchange, its Azure cloud platform, its .NET software development layer, its Silverlight Rich Internet Application (RIA) technology and its Visual Studio developer suite. Redmond's product pipeline has functioned more like a firehose of late, and the company has a ton of work to do to get developers up to speed on everything that's new.

    I know I keep saying "developers," but in Tech Ed's case, that's not really accurate. In North America, Tech Ed caters to both developers and IT pros (i.e. technologists who work with physical IT infrastructure, as well as security and administration of the server software that runs on it). This pairing has, since its inception, struck some as anomalous and others, including many exhibitors, as very smart. Certainly, it means Tech Ed ends up being a confab for virtually all professionals in Microsoft's ecosystem. And this year, Microsoft's Business Intelligence (BI) conference will be co-located with Tech Ed, further enhancing that fusion effect.

    Clearly then, Microsoft's show will focus on education, as its name assures us. Apple's will serve as both a press event and an opportunity to get its own App Store developer channel synced up with its newest technology advances. For example, we already know that iPhone OS 4.0 will provide for a limited multitasking capability; that will only work well if people know how to code to it in a capable way. Apple also told us its iAd advertising platform will be part of the new OS, and Steve Jobs insists that's to provide a revenue opportunity for developers. This too, then, needs to be explicated and soaked up by the faithful.

    A look at each show's breakout session lineup provides some interesting takeaways. WWDC will have very few Mac-specific sessions on offer, and virtually no sessions that are IT- or "Enterprise-" related. It's all about the phone, music players and tablets. However, WWDC will have plenty of low-level, hardcore tech coverage of such things as Advanced Memory Analysis and Creating Secure Applications, as well as lots of rich media-related content like Core Animation and Game Design and Development. Beyond Apple's proprietary platform, WWDC will also feature an array of sessions on HTML 5 and other Web standards. In all, WWDC offers over 100 technical sessions and hands-on labs.

    What about Tech Ed's editorial content? Like the target audience, it really runs the gamut. The show has 21 tracks (versus WWDC's 5) and more than 745 "learning opportunities", which include breakout sessions, demo stations, hands-on labs and Birds of a Feather discussion sessions. Topics range from architecture talks like Patterns of Parallel Programming, to cloud computing talks like Building High Capacity Compute Applications with Windows Azure, to IT-focused topics like Virtualization of Microsoft SharePoint 2010 Farm Architecture. I also count 19 sessions on Windows Phone 7. Unfortunately, with regard to Web standards and HTML 5, only a few sessions are offered, all of them specific to Internet Explorer.

    All in all, Apple's show looks more exciting and "sexier" than Tech Ed, while Microsoft's show seems a lot more enterprise-focused than WWDC. This is, of course, well in sync with each company's approach and products. Microsoft's content is much wider-ranging and bests WWDC in sheer volume of sessions and labs. I suppose some might argue that less is more; others that Apple's consumer-focused offerings simply don't provide for the same depth of coverage to a business audience. Microsoft has a serious focus on the cloud and a paucity of coverage on client-side Web standards; Apple has virtually no cloud offering at all. Again, this reflects each tech titan's go-to-market strategy.

    My own take is that employees of each company should attend the other's event. The amount of mutual exclusivity in content may make sense in terms of corporate philosophy, but the reality is that each company could stand to diversify into the other's territory, at least somewhat. My own talk at Tech Ed will focus on competitive analysis around Microsoft's BI products. Apple does not today figure into that analysis. Maybe one day it will.

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you've done a number of SQL code reviews, you'll know those signs in the code that suggest all might not be well. These 'code smells' are coding styles that don't directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase on the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book 'Refactoring: Improving the Design of Existing Code' (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to refactor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#.

    I've always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic 'Lint' did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers as to what the most important SQL smells are.

    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I'll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to 'sign them off' as being appropriate in particular circumstances. Also, whole classes of 'code smells' may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a 'code smell' if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist.

    Plamen Ratchev's wonderful article Ten Common SQL Programming Mistakes lists some of these 'code smells' along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn't entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported.

    If anything requires some sort of general agreement, the definition of code smells is one. I'm therefore going to make this blog 'dynamic', in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter) I'll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I'll do my best to oblige. In other words, I'll try to keep this blog up to date. The name against each 'smell' is the name of the person who twittered me, commented about it, or who has written about the 'smell'; it does not imply that they were the first ever to think of the smell!

    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    - Few or no comments
    - Use of functions in a WHERE clause (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive 'overloading' of routines
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes (Ian Stirk)
    - Use of 'broken' functions such as ISNUMERIC without additional checks
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines)
    - Stored procedures that can produce different columns, or a different order of columns in their results, depending on the inputs
    - Code that is never used
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit
    - Variable names the same as the datatype
    - Vague identifiers
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with the sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield - Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield - Atlantis UK)
    - Code that allows SQL injection (Mladen Prajdic)
    - Tables without clustered indexes (Matt Whitfield - Atlantis UK)
    - Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation (Nick Harrison)
    - Excessive column aliasing, which may point to a problem or could be a mapping implementation (Nick Harrison)
    - Joining "too many" tables in a query (Nick Harrison)
    - Stored procedures returning more than one record set (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive "OR" conditions (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    - Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - Order by 3,2 (Dave Levy)
    - Multi-statement table functions which are then filtered ('Sel * from Udf() where Udf.Col = Something') (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.

    Hardware config #1 (dual monitors work for Windows XP Pro):

    - First display -> external VGA port
    - Second display -> DVI input on gfx card

    Hardware config #2 (dual monitors work for Ubuntu 10.04.1):

    - First display -> VGA port on gfx card
    - Second display -> DVI input on gfx card

    I connected up the displays according to config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using config #1, or am I missing something?

    I tried setting up dual monitors using config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

        $ sudo X :2 -configure

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
        Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
        Build Date: 21 July 2010 12:47:34PM
        xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
        List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
        (++) Using config file: "/home/michael/xorg.conf.new"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
        Xorg detected your mouse at device /dev/input/mice.
        Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol.
        Xorg has configured a multihead system, please check your config.
        Your xorg.conf file is /home/michael/xorg.conf.new
        To test the server, run 'X -config /home/michael/xorg.conf.new'
        ddxSigGiveUp: Closing log

        $ sudo X -config /home/michael/xorg.conf.new

        Fatal server error:
        Server is already active for display 0
        If this server is no longer running, remove /tmp/.X0-lock and start again.
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using config #1 would be hugely appreciated. TIA, Mike
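
    For reference, one conventional way to pin both heads explicitly is a hand-written xorg.conf along these lines. This is only a sketch, and the BusID values are hypothetical placeholders that would have to be read off lspci for the two RV280 cards:

        Section "Device"
            Identifier "Radeon0"
            Driver     "radeon"
            BusID      "PCI:1:0:0"    # first RV280 (hypothetical, from lspci)
        EndSection

        Section "Device"
            Identifier "Radeon1"
            Driver     "radeon"
            BusID      "PCI:2:0:0"    # second RV280 (hypothetical)
        EndSection

        Section "Monitor"
            Identifier "LeftMonitor"
        EndSection

        Section "Monitor"
            Identifier "RightMonitor"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Radeon0"
            Monitor    "LeftMonitor"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Radeon1"
            Monitor    "RightMonitor"
        EndSection

        Section "ServerLayout"
            Identifier "DualHead"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
        EndSection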

    Read the article

  • Company Review: Google Products

    Google, Inc offers an array of products and services to all of its end-users. However their search capabilities are the foundation for Google’s current success and their primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users demands to be met while focusing on the free based model. This allows users to access Google data free of charge and indirectly gives Google a strong competitive advantage of other competitors along with the accuracy of the search results. According to Google, Inc, they offer the following types of searching capabilities: Alerts Get email updates on the topics of your choice Blog Search Find blogs on your favorite topics  Books Search the full text of books  Custom Search Create a customized search experience for your community  Desktop Search and personalize your computer  Dictionary Search for definitions of words and phrases Directory Search the web, organized by topic or category Earth Explore the world from your computer Finance Business info, news and interactive charts GOOG-411 Find and connect for free with businesses from your phone  Images Search for images on the web Maps View maps and directions News Search thousands of news stories Patent Search Search the full text of US Patents Product Search Search for stuff to buy Scholar Search scholarly papers Toolbar Add a search box to your browser Trends Explore past and present search trends Videos Search for videos on the web Web Search Search billions of web pages Web Search Features Find movies, music, stocks, books and more mapping Google’s free based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed in which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined according to the U. S. Department of Transportation Federal Highway Administration. Due to the fact that QFD is so customer driven Google is always in a constant state of change in attempt to reengineer its search algorithms, and other dependant systems so that end-users requirements are constantly being met. Value engineering is a key example of this, Google is constantly trying to improve all aspects of its products, improve system maintainability, and system interoperability. Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large scale systems like Google’s search applications include modular design of a product and/or service and providing accurate value analysis. 
A design approach that adheres to four fundamental tenets of cohesiveness, encapsulation, self-containment, and high binding to design a system component as an independently operable unit subject to change is how the Open System Joint Task Force defines modular design. More specifically M. S. Schmaltz defines modular software design as having a large collection of statements strung together in one partition of in-line code; we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service as defined by the Healthcare Financial Management Association.  “Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want.” (MIT,2010) Google, Inc encourages an open environment between all employees, also known as Googlers. This is reinforced by a cross-section team or cross-functional teams comprised from multiple departments assigned to every project so that every department like marketing, finance, and quality assurance has input on every project. In addition, Google is known for their openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows for 20% of an employee’s time can be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, Cross-Functional Team members come from different organizational units. Cross-Functional Teams may be permanent or ad hoc. Google’s search application product strategy primarily focuses on mass customization. This is allows Google to create a base search application and allows results to be returned to the end-users quickly based on specific parameters and search settings. In addition, they also store the data that is returned in case other desire the same results based on other end-users supplying the same customized settings. This allows Google to appear to render search results in virtually real-time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application’s product strategy is maintained simply because the IT staff designs, develops, and maintains all of their proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users. 
References:
http://www.google.com/intl/en/options/
http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
http://www.acq.osd.mil/osjtf/termsdef.html
http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
http://mitsloan.mit.edu/omg/om-definition.php
http://www.humtech.com/opm/grtl/ols/ols3.cfm
http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • 13.10 - Weird WiFi connection problems - WMP300N - Broadcom BCM4321

    - by user1898041
Just installed 13.10 on my desktop and I really like it. After having problems getting the wifi to work, I installed it while connected to the internet with an ethernet cable and added in the 3rd party software and updates as per the installation procedure. After installation was completed, I saw the wifi icon in the upper right hand corner, but it was not seeing any wifi networks. Some Googling brought me to use the 'Additional Drivers' application. It found the WMP300N Broadcom BCM4321-based PCI wifi card and installed the proprietary Broadcom STA wireless driver, which may have been installed before. I'm not sure. Here is the weird part: when I start my system, wifi seems to be in some sort of suspended state where the system sees that the card exists but the card will not detect any wifi networks. It will work after booting once I open the 'Additional Drivers' application and then start FireFox. I know it seems weird, but this is the process I've got down to get the card to recognize wifi networks. After those applications are open for a few seconds, the card starts to function like normal (although maintaining the wifi connection is a problem, but most likely a separate issue). The reason this is a problem is that this is supposed to just be a headless box managed through SSH. Here are the readouts from the common network diagnosis programs BEFORE I open 'Additional Drivers' and 'FireFox'. All commands were done with sudo. lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc.
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:24 errors:0 dropped:0 overruns:0 frame:0 TX packets:24 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1856 (1.8 KB) TX bytes:1856 (1.8 KB) - iwconfig eth1 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off - iwlist scan eth1 No scan results - Here are the various commands AFTER I open 'Additional Drivers' and 'FireFox' lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel 
Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) ip=192.168.1.103 latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:85 errors:0 dropped:0 overruns:0 frame:11901 TX packets:132 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:52641 (52.6 KB) TX bytes:19058 (19.0 KB) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:76 errors:0 dropped:0 overruns:0 frame:0 TX packets:76 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6084 (6.0 KB) TX bytes:6084 (6.0 KB) - iwconfig eth1 IEEE 802.11abg ESSID:"BU" Mode:Managed Frequency:2.447 GHz Access Point: 00:26:F2:1F:81:02 Bit Rate=54 Mb/s Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=59/70 Signal level=-51 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 - iwlist scan A LOT OF SSIDs FOUND! - I'd like to have this problem fixed, but I'm not quite sure where to go. Been Googling a lot and can't seem to find anyone else with this problem.
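For a headless box, one possible workaround (untested, and assuming that what 'Additional Drivers' effectively does here is (re)activate the proprietary Broadcom STA driver, which registers the card as eth1 per the readouts above) is to script the driver reload at boot, for example from /etc/rc.local, instead of opening any GUI applications:

    #!/bin/sh
    # Sketch: kick the Broadcom STA (wl) driver at boot so the card
    # starts seeing networks without 'Additional Drivers' or Firefox.
    modprobe -r wl          # unload the proprietary STA driver
    sleep 2
    modprobe wl             # reload it; the card re-registers as eth1
    sleep 5
    iwlist eth1 scan > /dev/null 2>&1 || true   # nudge an initial scan
    exit 0

If the reload alone does the trick, the same two modprobe lines can also be run over SSH whenever the card comes up in its suspended state.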

    Read the article

  • Best of Breed vs. Suite – Oracle’s SaaS Delivers Both

    - by yaldahhakim
The debate of which is better, "best of breed" business applications vs. an integrated suite, is certainly not a new conversation. It has been argued between IT vendors and CIOs for years. It's also important to clarify that "best of breed" does not necessarily translate into having the richest functionality; rather it's often about just having the best-fit solution to solve a specific business problem or need. So what does cloud have to do with the niche vs. suite debate? Consuming business applications in a cloud or SaaS deployment model can change the best of breed vs. suite discussion - if the cloud is done right. It's having your cake and eating it too, only better: you don't have to gather all the ingredients or wait to bake your cake, and you can adjust how big a slice you take. Before you eat, it's worth pausing to recall much of what we learned about IT over the last decade. These basic IT principles still hold true even though the financial model has changed from buying to renting. In other words, what's under the technology hood still matters. Architecture and development methodologies, like building an application based on open standards so it works with other systems, are still important. Data and information silos, complex integrations, and proprietary technologies that lock you in are still bad. While some may argue that IT no longer matters with cloud, the opposite is actually true. If anything, cloud can help return IT to its rightful place as a key strategic asset vs. a liability on the balance sheet. The "I" in CIO was never meant to stand for "integration", yet it's amazing how much time and money are poured into these types of initiatives for most organizations each year. Rather, the "I" needs to stand for "innovation". This is where Oracle SaaS can uniquely help. Oracle's application strategy has not really changed over the years. It's always been about bringing the best and richest functionality across the enterprise to our customers while leveraging a common, standards-based, and enterprise-grade platform. So not just best fit, but the best capabilities based on the input of thousands of enterprise customers across the globe. Oracle invests billions in R&D every year to add new capabilities to the broadest cloud portfolio in the industry, spanning functional pillars like CRM, HCM, ERP, etc. And where it makes sense, Oracle combines key strategic acquisitions to complement organic functionality. The result is best of breed delivered in a suite. Again, this is not something new. The game changer now with cloud is that it impacts HOW Oracle customers adopt the richest, most modern applications across the business - and how they continue to get them.
Consuming Oracle applications in the cloud means you can adopt new capabilities and updates very quickly and easily. There's no hardware to buy or software to manage. Oracle does it for you. Low upfront costs and an OpEx financial model are the easy part. Oracle Cloud Applications take it a big step further. For organizations that demand the latest and richest functionality and want to accelerate the time to value from their IT investment, Oracle Cloud is the right path. It's about holistically changing the "hows" and the "whys" of the organization by leveraging transformational innovations like social, mobile, and big data in a consistent and more powerful way - not just sales force automation or talent management. These technologies should impact all parts of the company, and Oracle Cloud is the enterprise-grade delivery vehicle. Oracle SaaS helps break down barriers of adoption and eases the headache of upgrades, investing in new supporting hardware, or adding internal expertise to manage it all. With Oracle Cloud, customers can get best of breed capabilities in either a full suite model or a la carte. And because it's entirely built on open standards, it's built to co-exist with existing IT investments. Updates can be automatic or delayed based on a customer's requirements. And it's complete - a full suite of cross-pillar functionality. Even better, if you don't like it, or need more or less, just turn the dial up or down. Just like your utility bill, you pay for what you use, and can consume more or less power whenever you need it. Lower cost and lower investment risk, without compromising on functionality, security, or performance. Technology still matters in the cloud. So our cloud customers also like that when they adopt our cloud applications, they get the best underlying technology, from the middleware and database platform down to infrastructure and Oracle's engineered systems. Therefore it's not just the latest and greatest in application functionality; everything underneath that makes it work is also the latest and greatest. The best of breed technology stack powering best of breed business applications, all delivered in a subscription-based model. The best of both worlds. Yep, that's the idea.

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX has probably done better at capturing the eye of developers, and it has been used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus, the development of the APEX application is also done in the cloud, with no special demands on the location or the enterprise access privileges of the developers. To sum it up, with APEX from the Oracle Cloud Database Service:
• the development environment is up and running in minutes
• no involvement from the internal IT department is required (not for infrastructure, platform, or development)
• superior availability and scalability are offered by Oracle
• users from anywhere in the world can be invited to access the application
• developers from anywhere in the world can participate in creating and maintaining the application
In addition, because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment - and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on-premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and, if Oracle Database XE does not suffice, software licenses are required as well. And this setup always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along - it also means that small organizations that do not even have IT colleagues can do the same.
Getting tailored applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits - without any investment.
My misunderstanding
For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves - meaning that business 'people' would actually go into APEX to create the application. To me APEX was too much of a developers' tool to see that happen - apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud-based development offerings - including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys - I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application - it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in - especially external users such as customers. They will worry later about upgrades, life cycle management and integration. Getting applications up and running quickly and starting to turn ideas into action and results right away: that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story - for APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with 'regular' Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based - nothing special about those languages - and can run just as easily on site as in the cloud. It has been around since 2004 (not including several predecessors that fed straight into APEX), so it can be considered pretty mature. Oracle as a company seems pretty stable, so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous lifetimes. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced.
APEX and ADF - a match made in heaven?! (or at least in the sky)
Note that using APEX only for a cloud-based database with REST-ful services is also a perfectly viable scenario: any UI - mobile or browser based - capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider - including the cloud-based, APEX-powered REST-ful services running against the Oracle Cloud Database Service!
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in, allowing for a cloud-based mobile app. Want to learn more about Oracle Database Cloud Service or Oracle Cloud? Just visit cloud.oracle.com or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.
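To make the REST angle concrete: an APEX RESTful service is just an HTTP endpoint returning JSON, so any client with an HTTP library can consume it. A minimal sketch with curl (the host name and the hr/employees resource below are hypothetical, not an actual service):

    # Query a hypothetical RESTful service defined in APEX
    curl "https://yourdomain.db.example-cloud.com/apex/hr/employees/"

    # Create a row through the same hypothetical resource
    curl -X POST -H "Content-Type: application/json" \
         -d '{"name": "Jones", "deptno": 30}' \
         "https://yourdomain.db.example-cloud.com/apex/hr/employees/"

This is the same mechanism an ADF Mobile client, or any other REST consumer, would use against such a business tier.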

    Read the article

  • How to configure VPN in Windows XP

    - by SAMIR BHOGAYTA
VPN Overview
A VPN is a private network created over a public one. It's done with encryption; this way, your data is encapsulated and secure in transit, and this creates the 'virtual' tunnel. A VPN is a method of connecting to a private network by way of a public network like the Internet. An Internet connection in a company is common. An Internet connection in a home is common too. With both of these, you could create an encrypted tunnel between them and pass traffic safely and securely. If you want to create a VPN connection you will have to use encryption to make sure that others cannot intercept the data in transit while it traverses the Internet. Windows XP provides a certain level of security by using Point-to-Point Tunneling Protocol (PPTP) or Layer Two Tunneling Protocol (L2TP). They are both considered tunneling protocols - simply because they create that virtual tunnel just discussed, by applying encryption.
Configure a VPN with XP
If you want to configure a VPN connection from a Windows XP client computer, you only need what comes with the operating system itself; it's all built right in. To set up a connection to a VPN, do the following:
1. On the computer that is running Windows XP, confirm that the connection to the Internet is correctly configured.
• You can try to browse the internet
• Ping a known host on the Internet, like yahoo.com, something that isn't blocking ICMP
2. Click Start, and then click Control Panel.
3. In Control Panel, double-click Network Connections.
4. Click Create a new connection in the Network Tasks task pad.
5. In the Network Connection Wizard, click Next.
6. Click Connect to the network at my workplace, and then click Next.
7. Click Virtual Private Network connection, and then click Next.
8. If you are prompted, select whether you will use a dial-up connection or a dedicated connection to the Internet via Cable, DSL, T1, Satellite, etc. Click Next.
9. Type a host name, IP or any other description you would like to appear in the Network Connections area. You can change this later if you want. Click Next.
10. Type the host name or the Internet Protocol (IP) address of the computer that you want to connect to, and then click Next.
11. You may be asked whether you want to use a Smart Card or not.
12. You are just about done; the rest of the screens just verify your connection. Click Next.
13. Select the Add a shortcut to this connection to my desktop check box if you want one; if not, leave it unchecked and click Finish.
14. You are now done making your connection, but by default it may try to connect. You can try the connection now if you know it is valid; if not, just close it down for now.
15. In the Network Connections window, right-click the new connection and select Properties. Let's take a look at how you can customize this connection before it's used.
16. The first tab you will see is the General tab. This only covers the name of the connection, which you can also rename from the Network Connections dialog box by right-clicking the connection and selecting rename. You can also configure a "first connect", which means that Windows can connect to the public network (like the Internet) before starting to attempt the 'VPN' connection. This is a perfect example of when you would have configured a dial-up connection; it would be the first thing you would have to do. It's simple: you have to be connected to the Internet first before you can encrypt and send data over it.
This setting makes sure that this is a reality for you.
17. The next tab is the Options tab. The Options tab has a lot you can configure in it. For one, you have the option to connect to a Windows domain: if you select this check box (unchecked by default), your VPN client will request Windows logon domain information while bringing up the VPN connection. You also have options here for redialing. Redial attempts are configured here if you are using a dial-up connection to get to the Internet. It is very handy to redial if the line is dropped, as dropped lines are very common.
18. The next tab is the Security tab. This is where you would configure basic security for the VPN client: any advanced IPSec configurations and other security protocols, as well as requirements for encryption and credentials.
19. The next tab is the Networking tab. This is where you can select which networking items are used by this VPN connection.
20. The last tab is the Advanced tab. This is where you can configure options for a firewall and/or sharing.
Connecting to Corporate
Now that you have your XP VPN client all set up and ready, the next step is to attempt a connection to the Remote Access or VPN server set up at the corporate office. To use the connection, follow these simple steps. To open the client again, go back to the Network Connections dialog box.
1. Once you are in the Network Connections dialog box, double-click the connection, or right-click it and select 'Connect' from the menu - this will initiate the connection to the corporate office.
2. Type your user name and password, and then click Connect. (Properties brings you back to what we just discussed in this article: all the global settings for the VPN client you are using.)
3. To disconnect from a VPN connection, right-click the icon for the connection, and then click "Disconnect".
Summary
In this article we covered the basics of building a VPN connection using Windows XP. This is very handy when you have a VPN device but don't have the 'client' that may come with it. If the VPN server doesn't use highly proprietary protocols, then you can use the XP client to connect with it. In a future article I will get into the nuts and bolts of IPSec and more detail on how to configure the advanced options in the Security tab of this client. Common error codes you may encounter:
678: The remote computer did not respond.
930: The authentication server did not respond to authentication requests in a timely fashion.
800: Unable to establish the VPN connection.
623: The system could not find the phone book entry for this connection.
720: A connection to the remote computer could not be established.
More on: http://www.windowsecurity.com/articles/Configure-VPN-Connection-Windows-XP.html
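The same connection can also be dialed without the GUI. Windows XP ships a small command-line tool, rasdial, that works against saved connection entries; a minimal sketch, assuming a connection entry named "Corporate" created with the wizard above and illustrative credentials:

    rem Connect using the saved entry, a user name and a password
    rasdial "Corporate" jsmith MyP@ssw0rd

    rem ...work across the tunnel...

    rem Drop the tunnel when done
    rasdial "Corporate" /disconnect

This is handy for logon scripts or scheduled tasks that need the tunnel up before they run.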

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and the business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline:
• Taking the training wheels off: Accelerating the Business with Oracle IAM - the explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs.
• "Necessity is the mother of invention": Technical solutions developed in the field - well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
• The Art & Science of Performance Tuning of Oracle IAM 11gR2 - real-world examples of performance tuning with Oracle IAM.
• Nowhere to go but up: Extending the benefits of accelerated IAM - anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.
Let's get started by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons, or to import or update user accounts and attributes. Further, IT organizations are operating as shared services with SLAs similar to telephone-carrier levels expected by their "clients". Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Many of the expectations built around asynchronous systems and batched updates are no longer adequate, and the number and types of users are growing. When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work - who knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry's computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly.
Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler: logging into custom desktop apps to request approvals, or going through email or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application - i.e. to feel like the application they are using. Citizen- and customer-facing applications have evolved from everywhere: custom applications, 3rd-party tools, and systems merged in from acquired entities or 3rd-party OEM products resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, etc. Often the designers/developers are no longer accessible and the documentation is limited. Bringing together the underlying directories to scale for growth and improve user experience is critical for revenue - but also for operations. Job functions are more dynamic. Take the Olympics, for example. Endless organizations - from corporations broadcasting, endorsing, or marketing through the event to non-profit athletic foundations and public/government entities for athletes and public safety - all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information, from hot ads to racing strategies or security plans. IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure. Then it needs to be disabled just as quickly as users go back to their previous responsibilities. On a more technical level, businesses today need optimized system directories and concrete tuning guidelines and parameters. They need to make the right choices (virtual directories, for example) by selecting the correct architectural pattern - virtual, direct, or replicated, plus the tuning that goes with it; the challenge is that the business must assess and choose the correct architectural pattern (centralized, virtualized, or distributed). Today's business organizations are very complex, heterogeneous enterprises that contain diverse and multifaceted information. In today's ever-changing global landscape, the strategic end goal for business in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and the users granted access to those devices, to grow exponentially. Businesses need to deploy an IAM system that can account for the demands for authentication and authorization across these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services.
Access management needs to handle traditional web-based access as well as new innovations around mobile, and it must address insufficient governance processes, which can lead to rogue identity accounts that then become a source of vulnerabilities within a business's identity platform. Risk-based decisions present their own challenges: an adaptive risk model has to make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and all other organizational processes will languish. A single server location presents not only network concerns for a distributed user base, but also identity challenges. The network risks are centered on the latency of the long trip that the traffic has to take. Other risks concern availability: if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management Solutions.

    Read the article

  • Underwriting in a New Frontier: Spurring Innovation

    - by [email protected]
Susan Keuer, product strategy manager for Oracle Insurance, shares her experiences and insight from the 2010 Association of Home Office Underwriters (AHOU) Annual Conference, April 11-14, in San Antonio, Texas.
How can I be more innovative in underwriting? It's a common question I hear from insurance carriers, producers and others, so it was no surprise that it was the key theme at the recent 2010 AHOU Annual Conference. This year's event drew more than 900 insurance professionals involved in the underwriting process across life and annuities, property and casualty and reinsurance from around the globe, including the U.S., Canada, Australia, the Bahamas, and more, to San Antonio - a Texas city where innovation transformed a series of downtown drainage canals into its premier River Walk tourist destination. CNN's medical correspondent Dr. Sanjay Gupta kicked off the conference with a phenomenal opening session that drove home the theme of the conference, "Underwriting in a New Frontier: Spurring Innovation." Drawing from his own experience as a neurosurgeon treating critically injured patients in the field in Iraq, Gupta inspired audience members to think outside the box during the underwriting process. He shared a compelling story of operating on a soldier who had suffered head trauma in a field hospital. With minimal supplies available, Gupta used a Black and Decker saw to operate on the soldier's head and reduce pressure on his swelling brain. Drawing from this example, Gupta encouraged underwriters to think creatively, be innovative, and consider new tools and sources of information, such as social networking sites, during the underwriting process. So, as you are looking at risk, take into consideration all the resources you have available. Gupta also stressed the concept of IKIGAI, noting that individuals who believe that their life is worth living are less likely to die than their counterparts without this belief. How does one quantify this approach to life, or this thought process, when evaluating risk? Could this be something to consider as a "category" in the near future? How can this same belief in your own work spur innovation? The role of technology was a hot topic of discussion throughout the conference. Sessions delved into everything from the latest in underwriting software to the rise of social media and how it is being increasingly integrated into underwriting processes and solutions. In one session, a trio of panelists representing the carrier, producer and vendor communities stressed the importance to underwriters of leveraging new technology and the plethora of online information sources, all of which could be used to accurately, honestly and consistently evaluate risk throughout the underwriting process. Another focused on the explosion of social media, noting:
1. Social media is growing exponentially - about eight percent of Americans used social media five years ago. Today about 46 percent of Americans do so, with 85 percent of financial services professionals using social media in their work.
2. It will impact your business - underwriters reconfirmed over and over that they are increasingly using "free" tools that are available in cyberspace in lieu of more costly solutions, such as inspection reports conducted by individuals in the field.
3. Information is instantly available on the Web, anytime, anywhere - LinkedIn was mentioned as a way to connect to peers in the underwriting community and producers alike. Many carriers and agents also are using Facebook to promote their company to customers, and as a point of entry that allows them to perform some functionality, such as accessing product marketing information, versus directing users to go to the carrier's own proprietary website. Other carriers have relaxed their tight brand marketing to allow their producers to drive more business to their personal Facebook sites, where they offer innovative tools such as Application Capture or ask for medical information in a more relaxed fashion.
Other key topics at the conference included the economy, ongoing industry consolidation, real-estate valuations as an asset and input into the underwriting process, and producer trends. All stressed a "back to basics" approach for low-cost term products. Finally, Connie Merritt, RN, PHN, entertained the large group of attendees with audience-engaging insight on how to "Tame the Lions in Your Life - Dealing with Complainers, Bullies, Grumps and Curmudgeons." Merritt noted that "we are too busy for our own good," and shared how her overachieving personality had impacted her life. Audience members then were asked to pick red, yellow, blue, or green shapes, without knowing that each one represented a specific personality trait. For example, those who picked blue were the peacemakers. Those who chose yellow were social - the hint was to "Be Quiet Longer." She then offered these "lion taming" steps:
1. Admit It
2. Accept It
3. Let Go
4. Be Present (which paralleled Gupta's IKIGAI concept)
When thinking about underwriting, I encourage you to be present in the moment and think creatively, but don't be afraid to look ahead to the future and be an innovator. I hope to see you at next year's AHOU Annual Conference, May 1-4, 2011 at The Mirage in Las Vegas, Nev.
Susan Keuer is the product strategy manager for new business underwriting. She brings more than 20 years of insurance industry experience working with leading insurance carriers and technology companies to her role on the product strategy team for life/annuities solutions within the Oracle Insurance Global Business Unit.

    Read the article

  • Linux bcm43224 wifi adapter slows down a couple minutes after boot

    - by Blubber
    I just installed Ubuntu on my mid 2012 MacBook Air. Everything worked out of the box, but the wifi is showing some weird behavior. When I first login it's really fast, loading google.com is near instant, and browsing in general feels at least as smooth as it did on Mac OS. However, after a couple minutes the connection slows down dramatically, sometimes it takes over 5s to load google.com, a simple reboot fixes the problem for another couple minutes. Specs: Wifi: 02:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01) Driver: open-source brcmsmac driver Kernel: Linux wega 3.8.0-21-generic #32-Ubuntu SMP Tue May 14 22:16:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Distro: Ubuntu 13.04 (uptodate) I tried a number of things, none of which actually helped Use proprietary sta driver from broadcom Installed firmware into /lib/firmware/brcms (which, as far as I can tell from logs, does not get loaded at all) Switch router to only use 2.4 OR 5 GHz Set router to only use a OR g OR n Set router to use AES encryption only Turned off power management on the adapter Set regulatory region to the correct value (NL) on both router and laptop Disable ipv6 Nothing seems to help, the slowdown always occurs. I did notice that the latency (ping google.com) stays roughly the same (around 9ms). Below is some more information that might be of use. $ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01) Subsystem: Apple Inc. Device [106b:00e9] Kernel driver in use: bcma-pci-bridge $ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no $ lsmod Module Size Used by dm_crypt 22820 1 arc4 12615 2 brcmsmac 550698 0 coretemp 13355 0 kvm_intel 132891 0 parport_pc 28152 0 kvm 443165 1 kvm_intel ppdev 17073 0 cordic 12574 1 brcmsmac brcmutil 14755 1 brcmsmac mac80211 606457 1 brcmsmac cfg80211 510937 2 brcmsmac,mac80211 bnep 18036 2 rfcomm 42641 12 joydev 17377 0 applesmc 19353 0 input_polldev 13896 1 applesmc snd_hda_codec_hdmi 36913 1 microcode 22881 0 snd_hda_codec_cirrus 23829 1 nls_iso8859_1 12713 1 uvcvideo 80847 0 btusb 22474 0 snd_hda_intel 39619 3 videobuf2_vmalloc 13056 1 uvcvideo snd_hda_codec 136453 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_cirrus bcm5974 17347 0 bluetooth 228619 22 bnep,btusb,rfcomm snd_hwdep 13602 1 snd_hda_codec lpc_ich 17061 0 videobuf2_memops 13202 1 videobuf2_vmalloc videobuf2_core 40513 1 uvcvideo videodev 129260 2 uvcvideo,videobuf2_core bcma 41051 1 brcmsmac snd_pcm 97451 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30180 1 snd_seq_midi snd_seq 61554 2 snd_seq_midi_event,snd_seq_midi snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29425 2 snd_pcm,snd_seq snd 68876 16 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_hda_codec_cirrus mei 41158 0 soundcore 12680 1 snd apple_bl 13673 0 mac_hid 13205 0 lp 17759 0 parport 46345 3 lp,ppdev,parport_pc usb_storage 57204 0 hid_apple 13237 0 hid_generic 12540 0 ghash_clmulni_intel 13259 0 aesni_intel 55399 399 aes_x86_64 17255 1 aesni_intel xts 12885 1 aesni_intel lrw 13257 1 aesni_intel gf128mul 14951 2 lrw,xts ablk_helper 13597 1 aesni_intel cryptd 20373 4 ghash_clmulni_intel,aesni_intel,ablk_helper i915 600351 3 ahci 25731 3 libahci 31364 1 ahci video 19390 1 i915 
i2c_algo_bit 13413 1 i915 drm_kms_helper 49394 1 i915 usbhid 47074 0 drm 286313 4 i915,drm_kms_helper hid 101002 3 hid_generic,usbhid,hid_apple $ dmesg | egrep 'b43|bcma|brcm|[F]irm' [ 0.055025] [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled [ 0.152336] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 2.187681] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-99] only partially covers this bridge [ 12.553600] bcma-pci-bridge 0000:02:00.0: enabling device (0000 -> 0002) [ 12.553657] bcma: bus0: Found chip with id 0xA8D8, rev 0x01 and package 0x08 [ 12.553688] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0) [ 12.553715] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0) [ 12.553764] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0) [ 12.605777] bcma: bus0: Bus registered [ 12.852925] brcmsmac bcma0:0: mfg 4bf core 812 rev 23 class 0 irq 17 [ 13.085176] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: false (implement) [ 13.085186] brcmsmac bcma0:0: brcms_ops_config: change power-save mode: false (implement) [ 20.862617] brcmsmac bcma0:0: brcmsmac: brcms_ops_bss_info_changed: associated [ 20.862622] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement) [ 20.862625] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 20.897957] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) $ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"wlan" Mode:Managed Frequency:5.22 GHz Access Point: E0:46:9A:4E:63:9A Bit Rate=65 Mb/s Tx-Power=17 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=63/70 Signal level=-47 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:13 Invalid misc:56 Missed beacon:0
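Until the root cause is found, one blunt workaround (untested, and assuming the open-source brcmsmac driver and the wlan0 interface shown in the readouts above) is a small watchdog that reloads the driver whenever connectivity stalls:

    #!/bin/sh
    # Hypothetical watchdog: reload brcmsmac when the link goes bad.
    while true; do
        if ! ping -c 3 -W 2 8.8.8.8 > /dev/null 2>&1; then
            modprobe -r brcmsmac    # unload the wifi driver
            sleep 2
            modprobe brcmsmac       # reload it
            sleep 10                # let NetworkManager reassociate
        fi
        sleep 60
    done

This does not explain the slowdown, but it approximates the effect of a reboot without interrupting the whole session.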

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at, defining, and designing the Application Architecture domain?

There are several definitions of Application Architecture. TOGAF says "The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business". FEA says the Application Architecture "Defines the applications needed to manage the data and support the business functions".

I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical.

I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Application Architecture domain. At a high level, these viewpoints are:

Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications directly support the business.

Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically, application rationalization), the phrase "complexity equals cost" drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:
- Software: The application itself, as well as the software the application relies on to function (web servers, application servers).
- Infrastructure: The underlying hardware and network components required by the application and supporting application software.
- Development: How the application is created and maintained. This encompasses development components that are part of the application itself (e.g. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
- Integration: The interfaces that the application provides for integration, as well as the integrations to other applications and data sources the application requires to function.
- Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

Organization View: Organizations are comprised of people, and those people use applications to do their jobs. Trying to define the Application Architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:
- Ownership: Who 'owns' the application? This will usually reflect primary funding and utilization, but not always.
- Funding: Who funds both the acquisition/creation as well as the ongoing maintenance (funding to create/change/operate)?
- Change: Who can/does request changes to the application, and what process do they follow?
- Utilization: Who uses the application, how often do they use it, and how do they use it?
- Support: Which organization is responsible for the ongoing support of the application?

Information View: Whether or not you subscribe to the view that "information drives the enterprise", it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or, at a higher level, how the Application Architecture domain relates to the Information Architecture domain). It includes:
- Access: The application is the mechanism by which end users access information. This could be through a primary application (e.g. a CRM application) or through an information-access type of application (a BI application, for example).
- Creation: Applications create data in order to provide information to end users (e.g. an application creates an order to be used by an end user as part of the fulfillment process).
- Consumption: Describes the data required by applications to function (e.g. a product id is required by a purchasing application to create an order).

Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the Application Architecture domain is through the use of 'services' (think SOA, web services, etc.). Whether it is building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already 'service enabled', compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The Application Service view consists of:
- Services: Here, I refer to the generic definition of a service: "a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage".
- Functions: The activities within an application that are not available/applicable for re-use. This view is helpful when identifying duplicate functions between applications that are not service enabled. (A minimal sketch of this distinction appears at the end of this article.)

Delivery Model View: It is hard to talk about EA today without hearing the terms 'cloud' or 'shared services'. Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market, as an example), etc. From an EA perspective, where and how an application is deployed has impacts on the overall enterprise architecture, from integration concerns to SLA requirements to security and compliance issues, so the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end users. The Delivery Model view consists of different types of delivery mechanisms/deployment options for applications:
- Traditional: Reflects non-cloud delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
- Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
- Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
- Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

Finally, the Application Architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others, either directly or indirectly, at some point. Oracle links them at a high level as follows: Business Capabilities and/or Business Processes (Business Architecture) link to the Applications that enable the capability/process (Application Architecture - COTS, custom), which link to the Information Assets managed/maintained by the Applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me do just that.
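As a concrete illustration of the Services vs. Functions distinction in the Application Service view, here is a minimal, hypothetical Java (JAX-WS style) sketch. The class and method names are invented for illustration only and are not from any specific product:

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // "Service": an application capability deliberately exposed for re-use by
    // other applications, with a contract that consumers can rely on.
    @WebService
    public class OrderService {

        @WebMethod
        public String createOrder(String productId, int quantity) {
            // Consumers only see the service contract; the internals stay hidden.
            return allocateOrderNumber(productId, quantity);
        }

        // "Function": internal application logic that is not exposed for re-use.
        // Non-service-enabled functions like this are where duplication between
        // applications tends to hide.
        private String allocateOrderNumber(String productId, int quantity) {
            return "ORD-" + productId + "-" + quantity;
        }
    }

The point is not the web service technology itself: the annotated method is the unit of re-use the Application Service view inventories, while the private function is not.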

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.

Hardware config #1: Dual monitors work for Windows XP Pro like this:
First display -> external VGA port
Second display -> DVI input on gfx card

Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this:
First display -> VGA port on gfx card
Second display -> DVI input on gfx card

I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something?

I tried setting up dual monitors using Config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

$ sudo X :2 -configure

X.Org X Server 1.7.6
Release Date: 2010-03-17
X Protocol Version 11, Revision 0
Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
Build Date: 21 July 2010 12:47:34PM
xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.16.4
Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
(++) Using config file: "/home/michael/xorg.conf.new"
(==) Using config directory: "/usr/lib/X11/xorg.conf.d"
(II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol.
Xorg has configured a multihead system, please check your config.
Your xorg.conf file is /home/michael/xorg.conf.new
To test the server, run 'X -config /home/michael/xorg.conf.new'
ddxSigGiveUp: Closing log

$ sudo X -config /home/michael/xorg.conf.new

Fatal server error:
Server is already active for display 0
If this server is no longer running, remove /tmp/.X0-lock and start again.
Please consult the The X.Org Foundation support at http://wiki.x.org for help.
ddxSigGiveUp: Closing log

I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed $ X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so
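For anyone attempting Config #1 by hand, a classic two-card xorg.conf for the radeon driver might look roughly like the sketch below. Treat it as a hedged starting point, not a known-good config for this machine: both BusIDs are placeholders (substitute the real values reported by lspci | grep VGA) and the identifiers are arbitrary:

    Section "Device"
        Identifier "Card0"        # card/port driving the first (external VGA) display
        Driver     "radeon"
        BusID      "PCI:1:5:0"    # placeholder - check with: lspci | grep VGA
    EndSection

    Section "Device"
        Identifier "Card1"        # card driving the DVI-connected display
        Driver     "radeon"
        BusID      "PCI:1:0:0"    # placeholder
    EndSection

    Section "Monitor"
        Identifier "Monitor0"
    EndSection

    Section "Monitor"
        Identifier "Monitor1"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "Card1"
        Monitor    "Monitor1"
    EndSection

    Section "ServerLayout"
        Identifier "DualHead"
        Screen     0 "Screen0" 0 0
        Screen     1 "Screen1" RightOf "Screen0"
        Option     "Xinerama" "on"    # optional: span one desktop across both heads
    EndSection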

    Read the article

  • 6 Facts About GlassFish Announcement

    - by Bruno.Borges
    Since Oracle announced the end of commercial support for future Oracle GlassFish Server versions, the Java EE world has been wondering what will happen to GlassFish Server Open Source Edition. Unfortunately, there is a lot of misleading information going around. So let me clarify some things with facts, not FUD.

Fact #1 - GlassFish Open Source Edition is not dead
GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Calling "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish community will remain strong towards the future of Java EE. Freed from ties to commercial decisions, and without a revenue-focused mindset, the community may actually be better placed to shape the next version.

Fact #2 - OGS support is not over
As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no more future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there is no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners by simplifying decisions during commercial negotiations.

Fact #3 - WebLogic is not always more expensive than OGS
Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control, plus license bundles such as Java SE Support. At the time of writing, OGS is list-priced at US$ 5,000 / processor. Some bloggers have been claiming that WebLogic is more expensive than this. Fact 3.1: that is not necessarily the case. The entry-level edition of WebLogic is called "Standard Edition" and falls under a policy where some "Standard Edition" products are licensed on a per-socket basis: as of the current price list, US$ 10,000 / socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost effective than OGS, and a customer can save money when running on a CPU with 4 cores or more, for example. Quote from the price list: "When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket." For more details, speak to your Oracle sales representative - these are list prices, every customer typically has a relationship with Oracle (as they do with other vendors), and different contractual details may apply.
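To make that math concrete, here is a rough, hypothetical worked example at the list prices quoted above. It assumes a core factor of 0.5, which is typical for x86 chips in Oracle's processor core factor table; the actual factor depends on the chip, so treat this as an illustration only:

    1-socket x86 server with 8 cores:
      OGS (per processor):  8 cores x 0.5 core factor = 4 processors x US$ 5,000 = US$ 20,000
      WLS SE (per socket):  1 occupied socket x US$ 10,000 = US$ 10,000

At 4 cores per socket (4 x 0.5 = 2 processors x US$ 5,000 = US$ 10,000) the two roughly break even, and above that WebLogic SE comes out cheaper at list price - which is exactly the "4 cores or more" point made above.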
And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise-grade, mission-critical application server than OGS, going back to the BEA days. Different editions of WLS provide features and upgrade options such as the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, bundled ADF and TopLink licenses, a bundled Web Tier (Oracle HTTP Server) license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence management capabilities, Advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc.

Fact #4 - There's no major vendor supporting community builds of Java EE app servers
No major vendor provides support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo; not anymore. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE.

Fact #5 - WebLogic and GlassFish share several Java EE implementations
It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT), and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase.

Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly
WebLogic is a closed-source offering. It is commercialized through a license-plus-support-fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. But the message now is much clearer: Oracle will commercially support only the proprietary product WebLogic, and will invest in GlassFish Server Open Source Edition as the reference implementation for the Java EE platform and the future Java EE 8 - a developer-friendly community distribution - while encouraging community participation through Adopt a JSR and contributions to GlassFish. Oracle's decision has much the same goal as IBM's when it killed support for WebSphere Community Edition, and Red Hat's when it renamed JBoss Community Edition to WildFly: simplifying and clarifying the marketing message and leaving the commercial field wide open to a single product, JBoss EAP in Red Hat's case. Oracle can now, as other vendors already do, focus on only one commercial offering. Some users are saying they will now move to WildFly, but it is important to note that Red Hat does not offer commercial support for WildFly builds. Although future JBoss EAP versions will come from the same codebase as WildFly, the builds will definitely not be the same, nor will they share 100% of their functionality and bug fixes. This means there will be no company running a WildFly build in production with support from Red Hat. This discussion has also surfaced an important and interesting piece of information: Oracle offers a free-for-developers OTN license for WebLogic. For other environments this is different, but please note this is the same policy Red Hat applies to JBoss EAP, as stated in their download page and terms.
Oracle had the same policy for OGS.

TL;DR:
- GlassFish Server Open Source Edition isn't dead.
- Current and new OGS 2.x/3.x customers will continue to have support (respecting the Lifetime Support Policy).
- WebLogic is not necessarily more expensive than OGS.
- Oracle will focus on one commercially supported Java EE application server, just as other vendors limit themselves to supporting only one build/product.
- Community builds are hardly ever commercially supported.
- Commercially supported builds of open source products are not exactly the same codebase as the community builds.

What's next for GlassFish and the Java EE community? There are conversations in place to tackle some of the community's desires, most of them stated by Markus Eisele in his blog post. We will keep you posted.

    Read the article

< Previous Page | 31 32 33 34 35 36 37  | Next Page >