Search Results

Search found 5784 results on 232 pages for 'points'.


  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud. It was raining clouds in this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.

    I met a number of platform administrators in Oracle - DBAs, Middleware Admins, SOA Admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with about as much certainty as Heisenberg would allow. What happened to the age-old IT discipline around administration, compliance and configuration management?

    VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision a full-fledged application as VM Assemblies. In this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel came down from weeks to hours was gratifying indeed).

    However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:
    - Project owners requiring an EC2-like VM instance for their projects
    - Admins needing the same for SPARC/Solaris
    - DBAs requiring dedicated databases for new projects
    - APEX developers needing just a ready-to-consume schema as a service
    - Java developers looking for a runtime platform
    - QA engineers needing a fast clone of their production environment

    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production: you shouldn't run production on Exadata and load test on a VM, because they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed.

    When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one can possibly think that we are trying to be the "renaissance man"! Well, I would possibly have digested that had it not been for the various personas I described above.

    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL and run away with all the resources?), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base!

    In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html

    Sudip Datta
    The views expressed here are my own and do not necessarily reflect the views of Oracle.
    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • 4 Ways Your Brand Can Jump From the Edge of Space

    - by Mike Stiles
    Can your brand’s social media content captivate the world and make it hold its collective breath? Can you put something on the screen that’s so compelling that your audience can’t look away? Will they want to make sure their friends see it so they can talk about it? If not, you’re probably not with Red Bull. I was impressed with Red Bull’s approach to social content even before Felix Baumgartner’s stunning skydive from the edge of space. And then they did this. According to Visible Measures, videos of the jump scored 50 million views in 4 days. 1,700 clips were generated from both official and organic sources. The live stream was the most watched YouTube stream of all time (8 million concurrent viewers). The 2nd most watched live stream was… Felix’s first attempt on Oct. 9.

    Are you ready to compete with that? I ask that question because some brands are still out there tying themselves up in knots about whether or not they should tweet. The public’s time and attention are scarce commodities, commodities they value greatly. The competition amongst brands for that time and attention is intense and going up like Felix’s capsule. If you still view your press releases as “content,” you won’t even be counted as being among the competition. Here are 4 lessons learned from Red Bull’s big leap:

    1. They have a total understanding of their target market and audience. Not only do they have an understanding of it, they do something about it. They act on it. They fill the majority of their thoughts with what the audience wants. They hunger for wild applause from that audience. They want to do things that embrace the audience’s lifestyle and immerse in it so the target will identify the brand as “one of them.” Takeaway: BE your target market.

    2. They deliver content that strikes the audience right where they emotionally live. If you want your content to have impact, you have to make your audience’s heart race, or make them tear up, or make them laugh. Label them “data points” all you want, but humans are emotional creatures. No message connects that’s not carried in on an emotion. Takeaway: You’re on the inside. If your content doesn’t make you say “wow,” it’s unlikely it will register with fans.

    3. They put aside old-school marketing and don’t let their content be degraded into a commercial. Their execs seem to understand the value in keeping a lid on the hard sell. So many brands just can’t bring themselves to disconnect advertising and social content. The result is, otherwise decent content gets contaminated with a desperation the viewer can smell a mile away. Think the Baumgartner skydive didn’t do Red Bull any good since he wasn’t drinking one on the way down while singing a jingle? Analysis company Taykey discovered that at the peak of the skydive buzz, about 1% of all online conversation was about the jump. Mentions of Red Bull constituted 1/3 of 1% of all Internet activity. Views of other Red Bull videos also shot up. Takeaway: Chill out with the ads. Your brand will get full credit for entertaining/informing fans in a relevant way, provided you do it.

    4. They don’t hesitate to ask, “What can we do next?” Most corporate cultures are a virtual training facility for “we can’t do that.” Few are encouraged to innovate or think big, if they think at all. Thinking big involves faith, and work. It means freedom and letting employees run a little wild with their ideas. There will always be the opportunity to let fear of everything that moves creep in and kill grand visions dead in their tracks.
Experimenting must be allowed. Failure must be allowed. Red Bull didn’t think big. They thought mega. They tried to outdo themselves. Felix could have gone ahead and jumped halfway up, thinking, “This is still relatively high up. Good enough.” But that wouldn’t have left us breathless. Takeaway: Go for it. Jump. In putting up social properties and gathering fans of your brand, you’ve basically invited people to a party. A good host doesn’t just set out warm beer and stale chips because that’s inexpensive and easy. Be on the lookout for ways to make your guests walk away saying, “That was epic.”

    Read the article

  • Avoiding coupling

    - by Seralize
    "It is also true that a system may become so coupled, where each class is dependent on other classes that depend on other classes, that it is no longer possible to make a change in one place without having a ripple effect and having to make subsequent changes in many places.[1] This is why using an interface or an abstract class can be valuable in any object-oriented software project." - Quote from Wikipedia

    Starting from scratch
    I'm starting from scratch with a project that I recently finished, because I found the code to be too tightly coupled and hard to refactor, even when using MVC. I will be using MVC on my new project as well but want to try and avoid the pitfalls this time, hopefully with your help.

    Project summary
    My issue is that I really wish to keep the Controller as clean as possible, but it seems like I can't do this. The basic idea of the program is that the user picks wordlists, which are sent to the game engine. It will pick random words from the lists until there are none left.

    Problem at hand
    My main problem is that the game will have 'modes', and needs to check the input in different ways through a method called checkWord(), but exactly where to put this and how to abstract it properly is a challenge to me. I'm new to design patterns, so I'm not sure whether any exist that might fit my problem.

    My own attempt at abstraction
    Here is what I've got so far after hours of 'refactoring' the design plans. I know it's long, but it's the best I could do to try and give you an overview (Note: as this is a sketch, anything is subject to change; all help and advice is very welcome. Also note the marked coupling points):

    Wordlist
        class Wordlist {
            // Basic CRUD etc. here!
            // Other sample methods:
            public function wordlistCount($user_id) {} // Returns count of how many wordlists a user has
            public function getAll($user_id) {}        // Returns all wordlists of a user
        }

    Word
        class Word {
            // Basic CRUD etc. here!
            // Other sample methods:
            public function wordCount($wordlist_id) {}  // Returns count of words in a wordlist
            public function getAll($wordlist_id) {}     // Returns all words from a wordlist
            public function getWordInfo($word_id) {}    // Returns information about a word
        }

    Wordpicker
        class Wordpicker {
            // The class needs to know which words and wordlists to exclude
            protected $_used_words = array();
            protected $_used_wordlists = array();

            // Wordlists to pick words from
            protected $_wordlists = array();

            /* Public Methods */
            public function setWordlists($wordlists = array()) {}
            public function setUsedWords($used_words = array()) {}
            public function setUsedWordlists($used_wordlists = array()) {}
            public function getRandomWord() {} // COUPLING POINT! Will most likely need to communicate with both the Wordlist and Word classes

            /* Protected Methods */
            protected function _checkAvailableWordlists() {} // COUPLING POINT! Might need to check if wordlists are deleted etc.
            protected function _checkAvailableWords() {}     // COUPLING POINT! Method needs to get all words in a wordlist from the Word class
        }

    Game
        class Game {
            protected $_session_id;          // The ID of a game session which gets stored in the database along with game details
            protected $_game_info = array();

            // Game instantiation
            public function __construct($user_id) {
                if (! $this->_session_id = $this->_gameExists($user_id)) {
                    // New game
                } else {
                    // Resume game
                }
            }

            // This is the method I tried to make flexible by using abstract classes etc.
            // Does it even belong in this class at all?
            public function checkWord($answer, $native_word, $translation) {} // Checks the answer against the native word / translation word, depending on game mode

            public function getGameInfo() {}              // Returns information about a game session, or creates it if it does not exist
            public function deleteSession($session_id) {} // Deletes a game session from the database

            // Methods dealing with game session information
            protected function _gameExists($user_id) {}
            protected function _getProgress($session_id) {}
            protected function _updateProgress($game_info = array()) {}
        }

    The Game
        /* CONTROLLER */
        /* "Guess the word" page */

        // User input
        $game_type = $_POST['game_type']; // Chosen with radio buttons etc.
        $wordlists = $_POST['wordlists']; // Chosen with checkboxes etc.

        // Starts a new game or resumes one from the database
        $game = new Game($_SESSION['user_id']);
        $game_info = $game->getGameInfo();

        // Instantiates a new Wordpicker
        $wordpicker = new Wordpicker();
        $wordpicker->setWordlists((isset($game_info['wordlists'])) ? $game_info['wordlists'] : $wordlists);
        $wordpicker->setUsedWordlists((isset($game_info['used_wordlists'])) ? $game_info['used_wordlists'] : NULL);
        $wordpicker->setUsedWords((isset($game_info['used_words'])) ? $game_info['used_words'] : NULL);

        // Fetches an available word
        if (! $word_id = $wordpicker->getRandomWord()) {
            // No more words left - game over!
            $game->deleteSession($game_info['id']);
            redirect();
        } else {
            // Presents word details to the user
            $word = new Word();
            $word_info = $word->getWordInfo($word_id);
        }

    The Bit to Finish
        /* CONTROLLER */
        /* "Check the answer" page */

        // ??????????????????

    ( http://pastebin.com/cc6MtLTR ) Make sure you toggle the 'Layout Width' to the right for a better view. Thanks in advance.

    Questions
    - To what extent should objects be loosely coupled? If object A needs info from object B, how is it supposed to get this without losing too much cohesion?
    - As suggested in the comments, models should hold all business logic. However, as objects should be independent, where to glue them together? Should the model contain some sort of "index" or "client" area which connects the dots?

    Edit: So basically what I should do for a start is to make a new model which I can more easily call with one-liners such as

        $model->doAction(); // Lots of code in here which uses classes!

    How about the method for checking words? Should it be its own object? I'm not sure where I should put it, as it's pretty much part of the 'game'. But on the other hand, I could just leave out the 'abstraction and OOPness' and make it a method of the 'client model', which will be encapsulated from the controller anyway. Very unsure about this.
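    As an illustration of the kind of abstraction being asked about, here is a minimal strategy-pattern sketch. It is written in C++ rather than the post's PHP purely for illustration, and the names (WordCheckStrategy, TranslationCheck, NativeWordCheck) are made up for this example, not taken from the original design. The idea is that Game depends only on an abstract checker, and each game 'mode' becomes its own class:

        #include <memory>
        #include <string>
        #include <utility>

        // Hypothetical interface: one implementation per game mode.
        class WordCheckStrategy {
        public:
            virtual ~WordCheckStrategy() = default;
            virtual bool checkWord(const std::string& answer,
                                   const std::string& nativeWord,
                                   const std::string& translation) const = 0;
        };

        // Mode 1: the player must guess the translation.
        class TranslationCheck : public WordCheckStrategy {
        public:
            bool checkWord(const std::string& answer,
                           const std::string& /*nativeWord*/,
                           const std::string& translation) const override {
                return answer == translation;
            }
        };

        // Mode 2: the player must guess the native word.
        class NativeWordCheck : public WordCheckStrategy {
        public:
            bool checkWord(const std::string& answer,
                           const std::string& nativeWord,
                           const std::string& /*translation*/) const override {
                return answer == nativeWord;
            }
        };

        // Game no longer knows about modes; it only knows the interface.
        class Game {
        public:
            explicit Game(std::unique_ptr<WordCheckStrategy> checker)
                : checker_(std::move(checker)) {}

            bool checkWord(const std::string& answer,
                           const std::string& nativeWord,
                           const std::string& translation) const {
                return checker_->checkWord(answer, nativeWord, translation);
            }

        private:
            std::unique_ptr<WordCheckStrategy> checker_;
        };

    With this shape, the controller would pick the concrete strategy from the submitted game type and hand it to Game, so adding a new mode means adding a class rather than editing Game itself.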

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the Integration of Social Media within business processes.  By Michael Krebs The relevance of "easy sign-on" in the age of the "Social Customer" With the growth of Social Networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, like Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why social network accounts of the users are getting more and more like a virtual fingerprint. With the growing relevance of social networks the importance of a simple way for customers to get in touch with say, customer care or contract departments, will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login-method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social media driven markets will be: The lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way of how an existing customer can use self-service portals, the cost in providing phone support can be lowered significantly. Recruiting and Hiring of "Digital Natives" Another particular example is "social" recruiting processes. The so called "digital natives" don´t want to type in their profile facts and CV´s in proprietary systems. Why not use the actual LinkedIn profile? In German speaking region, the market in the area of professional social networks is dominated by XING, the equivalent to LinkedIn. A few weeks back, this network also opened up their interfaces for integrating social sign-ons or the usage of profile data for recruiting-purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years. Identity Management as a key factor in the Customer Experience process To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM-systems with powerful Identity management solutions. With the highly efficient Oracle (social & mobile enabling) Identity Management solution, enterprises can combine easy sign on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM-data. 
The goal is to enrich the so called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C-markets and will gradually spread out to B2B-channels in the near future. Conclusion: Central and "Social" Identity management is key to Customer Experience Management and Talent Management For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate Social Sign-on capabilities with modern CX - and Talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern Customer Experience Infrastructure. Oracle Identity management solutions provide the opportunity to secure Social Applications and connect them with modern CX-solutions. At the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security. About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering Social and Mobile access for Oracle’s CRM- and HCM-solutions. …..End Guest Post…. With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses. Additional Resources: Oracle Identity Management 11gR2 release Oracle Identity Management website Datasheet: Mobile and Social Access (pdf) IDM at OOW: Focus on Identity Management Facebook: OracleIDM Twitter: OracleIDM We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • University teaches DOS-style C++, how to deal with it

    - by gaidal
    Half a year ago I had a look at the available programming study programs. I chose this one because, unlike most of the alternatives: the majority of the courses seemed to be about something concrete and useful; the languages used are C++ and Java, which are platform-independent; and later courses include developing for mobile devices and a course on Android development, which seemed modern and relevant. Now, after two introductory courses, we're just starting with C++, and my programming professor seems a bit weird. He's tested us on things like "why should you use constants" and "why are globals bad" in a kind of mechanical way, without much context, before teaching actual programming. His handouts use system("pause"), system("cls"), and getch() from conio.h, which seems ancient according to what I've read. I just did a task that was about printing the "ASCII letters from 32 to 255" (huh?), with an example picture showing a table with Windows' Extended ASCII - of course I got different results for 128-255 on my Arch Linux, which uses Unicode, and this isn't mentioned at all. I don't know, it just doesn't seem right... As if he is teaching programming because he has to, perhaps? Should I bring such things up? Hmm. I was looking forward to learning from someone who really knows stuff, and in an academic, rigorous way, like SICP or something. Aren't professors in programming supposed to be like that? I studied math for a while and every teacher and assistant there was really precise about what they said, but this is my second programming teacher who is sort of disappointing. Oh well.
    Now, the question: is this what to expect from universities, or is it not OK, and how do I deal with it? I have never touched the language C++ (or C) until now, and am not the right person to jump up and say "This is So Wrong!", so if I google something and find 10 people who say "xxx is blasphemy", how do I skillfully communicate this? I do think it would be better for those classmates who are total beginners not to learn bad habits (such as these vibes of total ignorance of other platforms!) during the upcoming courses, but I don't want to disrespect the teacher. I don't know if it's reasonable or just cocky to bring up things like "what about other platforms?" or "but what about this article or Stack Overflow answer that I read that said..." for every assignment. Or, if he keeps ignoring non-Windows programming, should I give up and focus on my own projects, or somehow argue that this really isn't OK nowadays? Are there any programming teachers out there, and what do you think? By the way, these are web-based courses; all interaction between teachers and students takes place in a forum.
    EDIT: A few answers seem to be making some incorrect assumptions, so maybe I should add a few things. I have been programming for fun on and off for 10 years, am pretty comfortable in 3 languages, and read programming blogs etc. regularly. Also, I feel kind of done being a student, having a degree in another field. I just need another, relevant diploma to work as a programmer, so I'm going back for that. Studying computer science for 5 years is not for me anymore, even though I enjoy learning and solving problems in my free time. Second, let me highlight that I don't expect it to be like the industry at all; quite the contrary. I expect it to be academic, dry and unnecessarily correct. No, it's not just math.
    Every professor I have had in math, or Japanese (major) or Chinese (minor), has been very academic, discussing subtle points for hours with passion. But the courses I'm taking now and a previous one in programming don't seem serious. They resemble neither industry NOR academia. That is the problem. And it's not because I can't learn programming anyway. Third, I don't necessarily want to learn C++ or Android development, and I know I could teach myself those and anything else if I wanted to. But I am going back to school anyway, and those platform-independent languages and the mobile stuff made me think that maybe they're serious about teaching something relevant here. Seems like I got this wrong, but we'll see.
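    For context, a minimal standard-C++ sketch of what the portable equivalents might look like (no conio.h, no system() calls), assuming the goal is simply to print the printable ASCII range and pause before exiting:

        #include <iostream>

        int main() {
            // Print the printable ASCII characters (codes 32-126).
            // Codes 128-255 are not ASCII at all; what they display depends on
            // the platform's encoding (Windows code pages vs. UTF-8 on Linux),
            // which is why a "32 to 255" table only looks right on Windows.
            for (int code = 32; code <= 126; ++code) {
                std::cout << code << " = " << static_cast<char>(code) << '\n';
            }

            // Portable stand-in for system("pause") or getch():
            // just wait for the user to press Enter.
            std::cout << "Press Enter to exit...";
            std::cin.get();
            return 0;
        }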

    Read the article

  • XNA - Use Mouse To Rotate & Arrow Keys To Scroll A Linearly Wrapped Texture:

    - by The Thing
    Using XNA I'm working on my first, relatively simple, videogame for the PC. At the moment my game window is 1024 x 768 and I have a 'Starfield' linearly wrapped background texture 1280 x 1280 in size whose origin has been set to its center point (width / 2, height / 2). This texture is drawn onscreen using (graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2) to place the origin in the center of the window. I want to be able to use the horizontal movement of the mouse to rotate my texture left or right and use the arrow keys to scroll the texture in four directions. From my own related coding experiments I have found that once I rotate the texture it no longer scrolls in the direction I want; it's as if the XNA framework's 'sense of direction' has somehow been 'rotated' along with the texture. As an example of what I've described above, let's say I rotate the texture 45 degrees to the right; then pressing the up arrow key results in the texture scrolling diagonally from top-right to bottom-left. This is not what I want: regardless of the degree or direction of rotation, I want my texture to scroll straight up, straight down, or to the left or right depending on which arrow key was pressed. How do I go about accomplishing this? Any help or guidance is appreciated. To finish up, there are two points I'd like to clarify: [1] The reason I'm using linear wrapping on my starfield texture is that it gives a nice impression of an endless starfield. [2] Using a texture at least 1280 x 1280 in conjunction with a game window of 1024 x 768 means that at no point in its rotation will the edges of the texture become visible. Thanks for reading.

    Update #1 - as requested by RCIX: The code below is what I was referring to earlier when I mentioned 'related coding experiments'. As you can see, I am scrolling a linearly wrapped texture in the direction I've moved the mouse relative to the center of the screen. This works perfectly if I don't rotate the texture, but once I do rotate it the direction of the scrolling gets messed up for some reason.

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            int x;
            int y;
            float z = 250f;
            Texture2D Overlay;
            Texture2D RotatingBackground;
            Rectangle? sourceRectangle;
            Color color;
            float rotation;
            Vector2 ScreenCenter;
            Vector2 Origin;
            Vector2 scale;
            Vector2 Direction;
            SpriteEffects effects;
            float layerDepth;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                graphics.PreferredBackBufferWidth = 1024;
                graphics.PreferredBackBufferHeight = 768;
                graphics.ApplyChanges();
                Direction = Vector2.Zero;
                IsMouseVisible = true;
                ScreenCenter = new Vector2(graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2);
                Mouse.SetPosition((int)graphics.PreferredBackBufferWidth / 2, (int)graphics.PreferredBackBufferHeight / 2);
                sourceRectangle = null;
                color = Color.White;
                rotation = 0.0f;
                scale = new Vector2(1.0f, 1.0f);
                effects = SpriteEffects.None;
                layerDepth = 1.0f;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                Overlay = Content.Load<Texture2D>("Overlay");
                RotatingBackground = Content.Load<Texture2D>("Background");
                Origin = new Vector2((int)RotatingBackground.Width / 2, (int)RotatingBackground.Height / 2);
            }

            protected override void UnloadContent() { }

            protected override void Update(GameTime gameTime)
            {
                float timePassed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                MouseState ms = Mouse.GetState();
                Vector2 MousePosition = new Vector2(ms.X, ms.Y);
                Direction = ScreenCenter - MousePosition;
                if (Direction != Vector2.Zero)
                {
                    Direction.Normalize();
                }
                x += (int)(Direction.X * z * timePassed);
                y += (int)(Direction.Y * z * timePassed);
                //No rotation = texture scrolls as intended. With rotation = texture no longer scrolls in the direction of the mouse.
                //My update method needs to somehow compensate for this.
                //rotation += 0.01f;
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.LinearWrap, null, null);
                spriteBatch.Draw(RotatingBackground, ScreenCenter, new Rectangle(x, y, RotatingBackground.Width, RotatingBackground.Height), color, rotation, Origin, scale, effects, layerDepth);
                spriteBatch.Draw(Overlay, Vector2.Zero, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
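    For reference, the underlying issue can be reasoned about as a coordinate-space mismatch: the keyboard/mouse direction is expressed in screen space, while the (x, y) offset is applied to the source rectangle, which lives in the texture's rotated space. One common approach (not necessarily the only fix) is to rotate the screen-space direction by the inverse of the sprite rotation before accumulating it. A minimal sketch of that 2D rotation, written in C++ here for brevity (XNA's Vector2/Matrix helpers express the same math):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate a 2D vector by 'angle' radians.
        Vec2 rotate(Vec2 v, float angle) {
            float c = std::cos(angle);
            float s = std::sin(angle);
            return { v.x * c - v.y * s,
                     v.x * s + v.y * c };
        }

        // Sketch: undo the sprite rotation so "up" on the keyboard stays "up"
        // on the screen. 'rotation' is the sprite's current rotation; depending
        // on the coordinate and rotation convention in use, the sign of the
        // angle may need to be flipped.
        Vec2 screenToTexture(Vec2 screenDir, float rotation) {
            return rotate(screenDir, -rotation);
        }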

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable, and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this recently struck me when I recently went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself.  As I looked at all that paper and all that history, two things immediately popped into my head.  “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world.  But it had me thinking, is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employee’s spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees, spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly. 
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture and access to the right information could be significantly improved. As you evaluate how content-process flows through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and making the same information accessible across departmental systems which has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think?  Are your important business processes as healthy as they can be?  Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • The Next Frontier: Java Embedded @ JavaOne

    - by Kristin Rose
    Now more than ever, the Java platform is the best technology for many embedded use cases. Java’s platform independence, high level of functionality, security, and developer productivity, address the key pain points in building embedded solutions... and that’s not just our opinion. Take a look at the new IDC report on Oracle’s stewardship of Java, “Java: Two and a half Years After the Acquisition” (doc #236309, August 2012). Java already powers around 3 billion devices worldwide, with traditional desktops and servers being only a small portion of that, and the ‘Internet of Things’ is just really starting to explode. It is estimated that within five years, intelligent and connected embedded devices will outnumber desktops and mobile phones combined, and will generate the majority of the traffic on the Internet. Is your platform and services strategy ready for the coming disruptions and opportunities? It should come as no surprise that Oracle is enthusiastically focused on Java for Embedded. New this year, Oracle is demonstrating its further commitment to the embedded marketplace by offering, for the first time, a dedicated conference focused on the business aspects of embedded Java: Java Embedded @ JavaOne. Co-located with the technically-focused JavaOne conference, Java Embedded @ JavaOne will run for two days in San Francisco targeting C-level executives, architects, business leaders, and decision makers. With 24 inspired business sessions with expert speakers from 18 prominent companies driving the next generation of Java Embedded business solutions (such as Cinterion, ARM, Hitachi and Rockwell Automation), attendees will learn how Java Embedded technologies and solutions can offer compelling value and a clear path forward to business efficiency and agility. You’ll also see how Oracle’s comprehensive technology portfolio can deliver a complete ‘Machine to Machine’ platform, from device to datacenter, resulting in a highly secure, resilient, high-performance and cost-effective solution. Seating is limited and we expect a lot of interest in this new event, so please register now! Note that if you are already attending the Oracle OpenWorld or JavaOne conferences, you can attend this conference for only $100 more. Watch my video below to find out more. I hope to see you there!
    Judson Althoff
    SVP of WWA&C

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here.

    There were a few challenges that arose during design and implementation:
    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:
    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    - Achieving the above goals while ensuring that partition movement is minimized.

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
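    To make the cost argument concrete, here is a small illustrative sketch in plain C++. It is emphatically not the Coherence PartitionAssignmentStrategy API; it is a toy model of the two ideas above: a scoring function that prefers balanced assignments, and a greedy heuristic that avoids the factorial cost of evaluating every possible layout at the price of possibly missing the global optimum.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Score a candidate assignment by how unbalanced it is:
        // the spread between the most and least loaded member (lower is better).
        int score(const std::vector<int>& partitionsPerMember) {
            auto mm = std::minmax_element(partitionsPerMember.begin(),
                                          partitionsPerMember.end());
            return *mm.second - *mm.first;
        }

        // Greedy heuristic: give each partition to the currently least-loaded
        // member. Cost is roughly O(P * M), versus the factorial/exponential
        // cost of scoring every possible assignment.
        std::vector<std::size_t> greedyAssign(std::size_t partitions,
                                              std::size_t members) {
            std::vector<int> load(members, 0);
            std::vector<std::size_t> owner(partitions);
            for (std::size_t p = 0; p < partitions; ++p) {
                std::size_t best = static_cast<std::size_t>(
                    std::min_element(load.begin(), load.end()) - load.begin());
                owner[p] = best;
                ++load[best];
            }
            return owner;
        }

    A real strategy would fold the other objectives (backup placement, machine topology, minimizing movement) into the scoring and the heuristic, which is exactly where the design difficulty described above comes from.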

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it.  If the light turns red and you’re blocking the perpendicular traffic, that’s your fault in judgment.  You get fined and get points on your license and you don’t get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through success in that market. Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • Getting Started with ADF Mobile Sample Apps

    - by Denis T
    Getting Started with ADF Mobile Sample Apps

    Installation Steps
    1. Install JDeveloper 11.1.2.3.0 from Oracle Technology Network.
    2. After installing JDeveloper, go to the Help menu, select "Check For Updates", find the ADF Mobile extension and install it. This will require you to restart JDeveloper.
    3. For iOS development, be on a Mac and have Xcode installed. (Currently only Xcode 4.4 is officially supported. Xcode 4.5 support is coming soon.)
    4. For Android development, have the Android SDK installed.
    5. In the JDeveloper Tools menu, select "Preferences". In the Preferences dialog, select ADF Mobile. You can expand it to configure your Platform preferences for things like the location of Xcode and the Android SDK.
    6. In your /jdeveloper/jdev/extensions/oracle.adf.mobile/Samples folder you will find a PublicSamples.zip. Unzip this into the Samples folder so you have all the projects ready to go.
    7. Open each sample application's .JWS file to open the corresponding workspace. Then from the "Application" menu, select "Deploy" and then select the deployment profile for the platform you wish to deploy to. Try deploying to the simulator/emulator on each platform first, because it won't require signing. Note: If you wish to deploy to the Android emulator, it must be running before you start the deployment.

    Sample Application Details (recommended order of use)
    1. HelloWorld - The "hello world" application for ADF Mobile, which demonstrates the basic structure of the framework. This basic application has a single application feature that is implemented with a local HTML file. Use this application to ascertain that the development environment is set up correctly to compile and deploy an application. See also Section 4.2.2, "What Happens When You Create an ADF Mobile Application."
    2. CompGallery - This application is meant to be a runtime application and not necessarily to review the code, though that is available. It serves as an introduction to the ADF Mobile AMX UI components by demonstrating all of these components. Using this application, you can change the attributes of these components at runtime and see the effects of those changes in real time without recompiling and redeploying the application after each change. See generally Chapter 8, "Creating ADF Mobile AMX User Interface."
    3. LayoutDemo - This application demonstrates the user interface layout and shows how to create the various list and button styles that are commonly used in mobile applications. It also demonstrates how to create the action sheet style of a popup component and how to use various chart and gauge components. See Section 8.3, "Creating and Using UI Components" and Section 8.5, "Providing Data Visualization." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    4. JavaDemo - This application demonstrates how to bind the user interface to Java beans. It also demonstrates how to invoke EL bindings from the Java layer using the supplied utility classes. See also Section 8.10, "Using Event Listeners" and Section 9.2, "Understanding EL Support."
    5. Navigation - This application demonstrates the various navigation techniques in ADF Mobile, including bounded task flows and routers. It also demonstrates the various page transitions. See also Section 7.2, "Creating Task Flows." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    6. LifecycleEvents - This application implements lifecycle event handlers on the ADF Mobile application itself and its embedded application features. This application shows you where to insert code to enable the applications to perform their own logic at certain points in the lifecycle. See also Section 5.6, "About Lifecycle Event Listeners." Note: On iOS, the LifecycleEvents sample application logs data to the Console application, located at Applications > Utilities > Console.
    7. DeviceDemo - This application shows you how to use the DeviceFeatures data control to expose such device features as geolocation, e-mail, SMS, and contacts, as well as how to query the device for its properties. See also Section 9.5, "Using the DeviceFeatures Data Control." Note: You must run this application on an actual device, because SMS and some of the device properties do not function on an iOS simulator or Android emulator.
    8. GestureDemo - This application demonstrates how gestures can be implemented and used in ADF Mobile applications. See also Section 8.4, "Enabling Gestures."
    9. StockTracker - This application demonstrates how data change events use Java to enable data changes to be reflected in the user interface. It also has a variety of layout use cases, gestures and basic mobile patterns. See also Section 9.7, "Data Change Events."

    Read the article

  • Are there Negative Impacts of Open Source on a Commercial Environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow, but I wasn't sure if it was a good fit for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has).

    Our industry does not open source (we would be the first firm to try this), and the feedback I'm getting from my management team is either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipe openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something).

    Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. And I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take code apart and put it back together. I'm interested in hearing people's thoughts (it doesn't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make), because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally, or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well).

    Update: Long story short, although code is involved, this is not so much about code as it is about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work and millions of dollars into it; our funds are pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).

    We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it, then as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. I have just finished the first chapter of the "OpenGL SuperBible", which has two examples. The first provides the complete code and shows how to draw a simple triangle. The second is supposed to show how to move a rectangle using SpecialKeys, but the only code provided for it is the SpecialKeys method. I tried to implement it anyway and ran into two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Since it is now also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the file. Is this proper C++ practice? I copied the part where the vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. Now, the first time I press a key, the rectangle's upper-left vertex is moved to (-0.5, -0.5). This comes from

GLfloat blockX = vVerts[0]; //Upper left X
GLfloat blockY = vVerts[7]; // Upper left Y

and I think it is also the reason why my rectangle is shifted at the beginning. After the first key press everything works just fine. Here is my complete code; I hope you can help me on those two points.

GLBatch squareBatch;
GLShaderManager shaderManager;

//Load up a triangle
GLfloat vVerts[] = {-0.5f,0.5f,0.0f,
                    0.5f,0.5f,0.0f,
                    0.5f,-0.5f,0.0f,
                    -0.5f,-0.5f,0.0f};

//Window has changed size, or has just been created.
//We need to use the window dimensions to set the viewport and the projection matrix.
void ChangeSize(int w, int h)
{
    glViewport(0,0,w,h);
}

//Called to draw the scene.
void RenderScene(void)
{
    //Clear the window with the current clearing color
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);

    GLfloat vRed[] = {1.0f,0.0f,0.0f,1.0f};
    shaderManager.UseStockShader(GLT_SHADER_IDENTITY,vRed);
    squareBatch.Draw();

    //perform the buffer swap to display the back buffer
    glutSwapBuffers();
}

//This function does any needed initialization on the rendering context.
//This is the first opportunity to do any OpenGL related Tasks.
void SetupRC()
{
    //Blue Background
    glClearColor(0.0f,0.0f,1.0f,1.0f);
    shaderManager.InitializeStockShaders();

    squareBatch.Begin(GL_QUADS,4);
    squareBatch.CopyVertexData3f(vVerts);
    squareBatch.End();
}

//Respond to arrow keys by moving the camera frame of reference
void SpecialKeys(int key,int x,int y)
{
    GLfloat stepSize = 0.025f;
    GLfloat blockSize = 0.5f;
    GLfloat blockX = vVerts[0]; //Upper left X
    GLfloat blockY = vVerts[7]; // Upper left Y

    if(key == GLUT_KEY_UP) { blockY += stepSize; }
    if(key == GLUT_KEY_DOWN){blockY -= stepSize;}
    if(key == GLUT_KEY_LEFT){blockX -= stepSize;}
    if(key == GLUT_KEY_RIGHT){blockX += stepSize;}

    //Recalculate vertex positions
    vVerts[0] = blockX;
    vVerts[1] = blockY - blockSize*2;
    vVerts[3] = blockX + blockSize * 2;
    vVerts[4] = blockY - blockSize *2;
    vVerts[6] = blockX+blockSize*2;
    vVerts[7] = blockY;
    vVerts[9] = blockX;
    vVerts[10] = blockY;

    squareBatch.CopyVertexData3f(vVerts);
    glutPostRedisplay();
}

//Main entry point for GLUT based programs
int main(int argc, char** argv)
{
    //Sets the working directory. Not really needed
    gltSetWorkingDirectory(argv[0]);

    //Passes along the command-line parameters and initializes the GLUT library.
    glutInit(&argc,argv);

    //Tells the GLUT library what type of display mode to use, when creating the window.
    //Double buffered window, RGBA-Color mode, depth-buffer as part of our display, stencil buffer also available
    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA|GLUT_DEPTH|GLUT_STENCIL);

    //Window size
    glutInitWindowSize(800,600);
    glutCreateWindow("MoveRect");

    glutReshapeFunc(ChangeSize);
    glutDisplayFunc(RenderScene);
    glutSpecialFunc(SpecialKeys);

    //initialize GLEW library
    GLenum err = glewInit();
    //Check that nothing goes wrong with the driver initialization before we try and do any rendering.
    if(GLEW_OK != err)
    {
        fprintf(stderr,"Glew Error: %s\n",glewGetErrorString);
        return 1;
    }

    SetupRC();
    glutMainLoop();
    return 0;
}
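
To show what I mean about vVerts[7], here is a minimal sketch (plain Python, only mirroring the array above as a flat list of x,y,z components; it is an illustration, not part of the program) of which components SpecialKeys actually reads with this initial vertex ordering:

    # vVerts laid out exactly as in the C++ code: x,y,z for each of the four vertices
    vVerts = [-0.5,  0.5, 0.0,   # vertex 0: upper left
               0.5,  0.5, 0.0,   # vertex 1: upper right
               0.5, -0.5, 0.0,   # vertex 2: lower right
              -0.5, -0.5, 0.0]   # vertex 3: lower left

    print(vVerts[0])  # -0.5 -> the upper-left X, as the comment in SpecialKeys assumes
    print(vVerts[7])  # -0.5 -> the Y of vertex 2 (a lower vertex), not the upper-left Y (vVerts[1] == 0.5)

    # The recalculation in SpecialKeys rewrites the quad in the order
    # lower-left, lower-right, upper-right, upper-left, with blockY as the top edge.
    # Because the initial array above uses a different order, blockY starts out as -0.5,
    # so the first key press rebuilds the rectangle a full block height too low.
    # Initializing vVerts in the order the recalculation assumes (lower-left first)
    # would make vVerts[7] the top edge from the start and avoid the jump.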

    Read the article

  • Teamviewer: cannot control monitor 1, but can control monitor 2

    - by DaveT
    I'm using the web client of Teamviewer from my work computer trying to control my home computer. I have 2 monitors on the remote desktop, but for some reason only have control on the second monitor. When I switch to the main monitor (monitor 1), I cannot do anything and cannot even move the cursor. But I have no issues when I switch over to the second monitor (monitor 2). I used to have no issues with either, but in the past couple of months this has been causing me issues. Anyone have a suggestion? Thanks!! Also... Here is the log from the Teamviewer session. Showing me switching back and forth between the monitors. (just in case this will help). I had to remove the links in order to post the log since I don't have enough reputation points, but they were just teamviewer login weblinks. =============================================================================== 21.08 16:00:41,176: Version: 9.0.15099 21.08 16:00:41,177: Sandbox: remote 21.08 16:00:41,177: SysLanguage: en 21.08 16:00:41,177: VarLanguage: en 21.08 16:00:41,177: Flash Player: PlugIn (WIN 14,0,0,179) 21.08 16:00:41,178: UseLanguage: en 21.08 16:00:41,178: UseLanguage: en 21.08 16:00:41,182: TeamViewer hasPassword: true 21.08 16:00:41,418: ExternalConnect id=910035824 21.08 16:00:41,419: CT connect 910035824 masterURL: , sandbox = remote 21.08 16:00:41,425: MC.requestRoute(910035824) 21.08 16:00:41,426: MC.sendMasterCommand text=F=RequestRoute2&ID1=777&Client=TV& ID2=910035824&SA_AccountID=26641022&SA_PasswordMD5HashBase64Encoded=& SA_SessionSecret=f7H6Z7SYfX5ahQ7SJq/r/K20PBYg9fOZhp+DKLhf5ts=&SA_SessionID=1558929948& V=9.0.15099&OS=Flash 21.08 16:00:41,426: MC wait for ping completion 21.08 16:00:42,064: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2] 21.08 16:00:42,182: PingThread: TCP-Ping ok 21.08 16:00:42,183: MC.socket mode = TCP, MasterURL: 21.08 16:00:42,183: MC.connect: 21.08 16:00:43,058: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2] 21.08 16:00:43,058: MC.connectHandler: [Event type="connect" bubbles=false cancelable=false eventPhase=2] 21.08 16:00:43,236: MC.requestRouteResponse: [email protected]_10800_128000_762319420_910035824_10000__1_0_16778176_128000_16778176: 128000;2147483647:1280000;4:640000_786297_786297 21.08 16:00:43,239: CT init socket: TCP 21.08 16:00:43,513: PS.socket event: [Event type="connect" bubbles=false cancelable=false eventPhase=2] 21.08 16:00:43,514: CT.connectHandler: [Event type="connect" bubbles=false cancelable=false eventPhase=2] 21.08 16:00:43,519: Browser name: Netscape 21.08 16:00:43,936: CMD_IDENTIFY id=910035824 ver=2.41 21.08 16:00:44,666: CMD_CONFIRMENCRYPTION: encryption confirmed 21.08 16:00:44,667: Started resendrequest timer 21.08 16:00:45,063: Remote Version: TV 009.000 21.08 16:00:45,501: start classic authentication 21.08 16:00:45,502: Login::SendRequestToConsole(): url= 21.08 16:00:45,828: start srp authentication 21.08 16:00:46,983: checkFirstPacket ok, m_LastReceivedPacketID =4 21.08 16:00:47,148: Login::SendRequestToConsole(): url= 21.08 16:00:47,478: start srp authentication 21.08 16:00:48,210: Login::SendRequestToConsole(): url= 21.08 16:00:48,485: checkFirstPacket ok, m_LastReceivedPacketID =7 21.08 16:00:48,780: TVCmdAuthenticate_Authenticated: 1 21.08 16:00:49,321: Connected to 910035824, name=NEWMAN, os=14, version=9.0.31064 21.08 16:00:49,329: ConnectionAccessSettings: RemoteControl: AllowedFileTransfer: AllowedControlRemoteTV: AllowedSwitchSides: DeniedAllowDisableRemoteInput: 
AllowedAllowVPN: AllowedAllowPartnerViewDesktop: Allowed 21.08 16:00:52,195: unexpected TVCommand.CommandType == 56 21.08 16:00:52,231: CW received display params: 1680x1050x8 monitors: 2 (active:0) 21.08 16:00:52,301: Caching active, version=2 21.08 16:03:47,158: CW received display params: 1680x1050x8 monitors: 2 (active:1) 21.08 16:04:24,447: CW received display params: 1680x1050x8 monitors: 2 (active:0) 21.08 16:04:40,609: CW received display params: 3360x1050x8 monitors: 2 (active:-1) 21.08 16:04:59,802: CW received display params: 1680x1050x8 monitors: 2 (active:1) 21.08 16:04:59,933: CW received display params: 1680x1050x8 monitors: 2 (active:1) 21.08 16:05:58,419: CW received display params: 1680x1050x8 monitors: 2 (active:0) 21.08 16:06:36,824: CW received display params: 1680x1050x8 monitors: 2 (active:1) 21.08 16:07:07,232: CW received display params: 1680x1050x8 monitors: 2 (active:0)

    Read the article

  • Setting up Edimax EW-7206APg as Universal Repeater

    - by Ondra Žižka
    Hi, I'm having trouble setting up an Edimax EW-7206APg as a Universal Repeater. I've read a few manuals, but they are unclear on certain points. I've managed to get the repeater into a "connected" state. I set the same WPA passphrase as the router has, because I haven't seen any other place to set one. These are my settings:

System Uptime: 0day:1h:33m:11s
Hardware Version: Rev. A
Runtime Code Version: 1.32

Wireless Configuration
Mode: Universal Repeater
ESSID: edimax
Channel Number: 6
Security: WPA-shared key
BSSID: 00:c0:9f:40:bd:38
Associated Clients: 0

Wireless Repeater Interface Configuration
ESSID: Dusan
Security: WPA
BSSID: 00:4f:62:23:8f:7e
State: Connected

LAN Configuration
IP Address: 192.168.0.10
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.0.1
MAC Address: 00:c0:9f:40:bd:37

This is ipconfig /all on the wireless client (labels translated from Czech):

Connection-specific DNS Suffix: riomail.cz
Description: Intel(R) PRO/Wireless 2200BG Network Connection
Physical Address: 00-0E-35-3D-77-68
DHCP Enabled: Yes
Autoconfiguration Enabled: Yes
IP Address: 192.168.0.5
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.0.1
DHCP Server: 192.168.0.1
DNS Servers: 94.74.192.252, 94.74.192.244

I can ping the repeater and the root AP, but not a DNS server or any other IP beyond the root AP. Does anyone have an idea what's wrong? Thanks, Ondra
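
A minimal sketch of the reachability check described above, for reference (Python calling the system ping; the "-n" flag matches the Windows client shown in the ipconfig output, on Linux "-c" would be used instead):

    import subprocess

    # Addresses taken from the configuration above
    hops = [
        ("repeater LAN address", "192.168.0.10"),
        ("root AP / default gateway", "192.168.0.1"),
        ("first DNS server beyond the gateway", "94.74.192.252"),
    ]

    for label, ip in hops:
        # "-n 1" sends a single echo request on Windows
        rc = subprocess.call(["ping", "-n", "1", ip])
        print(label, ip, "reachable" if rc == 0 else "unreachable")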

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7 year old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch: "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there have been reports that this solution is prohibitively expensive, labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn’t find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." - Steve. Since the costs of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash). Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack Disconnect the laptop battery for an extended period of time. Doesn't work, laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for awhile. The poor thing. Clear CMOS by setting the proper jumper setting or by removing the CMOS (RTC) battery, or by short circuiting a (hidden?) jumper that looks like a pair of solder marks -various sources for various Satellite models: Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020 Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar Satellite A215: "Short the B500 solder pads on the system board." -fixya Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper. 
Here are pictures of the system board (full-resolution images were attached to the original post): where is the jumper (or the pair of solder pads to short) that will clear the CMOS on this board? Possibly related questions: "Remove Toshiba laptop BIOS password?" and "Password Problem Toshiba Satellite".

    Read the article

  • Need help with yum, python and PHP in CentOS (I made a complete mess!)

    - by pek
    a while back I wanted to install some plugins for Trac but it required python 2.5 I tried installing it (I don't remember how) and the only thing I managed was to have two versions of python (2.4 and 2.5). Trac still uses the old version but the console uses 2.5 (python -V = Python 2.5.2). Anyway, the problem is not python, the problem is yum (which uses python). I am trying to upgrade my PHP version from 5.1.x to 5.2.x. I tried following this tutorial but when I reach the step with yum I get this error: >[root@XXX]# yum update Loading "installonlyn" plugin Setting up Update Process Setting up repositories Reading repository metadata in from local files Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? yummain.main(sys.argv[1:]) File "/usr/share/yum-cli/yummain.py", line 94, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 381, in doCommands return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds) File "/usr/share/yum-cli/yumcommands.py", line 150, in doCommand return base.updatePkgs(extcmds) File "/usr/share/yum-cli/cli.py", line 672, in updatePkgs self.doRepoSetup() File "/usr/share/yum-cli/cli.py", line 109, in doRepoSetup self.doSackSetup(thisrepo=thisrepo) File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 338, in doSackSetup self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 200, in populateSack sack.populate(repo, with, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 91, in populate dobj = repo.cacheHandler.getPrimary(xml, csum) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 100, in getPrimary return self._getbase(location, checksum, 'primary') File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 86, in _getbase (db, dbchecksum) = self.getDatabase(location, metadatatype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 82, in getDatabase db = self.makeSqliteCacheFile(filename,cachetype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 245, in makeSqliteCacheFile self.createTablesPrimary(db) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 165, in createTablesPrimary cur.execute(q) File "/usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute self.rs = self.con.db.execute(SQL) _sqlite.DatabaseError: near "release": syntax error Any help? Thank you. Update OK, so I've managed to update yum hoping it would solve my problems but now I get a slightly different version of the same error: [root@XXX]# yum -y update Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirror.skiplink.com * base: www.gtlib.gatech.edu * epel: mirrors.tummy.com * extras: yum.singlehop.com * updates: centos-distro.cavecreek.net (process:30840): GLib-CRITICAL **: g_timer_stop: assertion `timer != NULL' failed (process:30840): GLib-CRITICAL **: g_timer_destroy: assertion `timer != NULL' failed Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? 
yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 309, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 178, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 345, in doCommands self._getTs(needTsRemove) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs self._getTsInfo(remove_only) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo pkgSack = self.pkgSack File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 661, in <lambda> pkgSack = property(fget=lambda self: self._getSacks(), File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 501, in _getSacks self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack sack.populate(repo, mdtype, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 190, in populate dobj = repo_cache_function(xml, csum) File "/usr/lib/python2.4/site-packages/sqlitecachec.py", line 42, in getPrimary self.repoid)) TypeError: Can not create packages table: near "release": syntax error I'm guessing that this "release" thing has something to do with a repository, but I didn't find anything... I went to the sqlitecachec.py at line 42 which writes (line numbers added for convenience): 39: return self.open_database(_sqlitecache.update_primary(location, 40: checksum, 41: self.callback, 42: self.repoid)) Update 2 I think I found the problem. This post suggests that the problem is sqlite and not yum. The version of sqlite I have installed is 3.6.10 but I have no idea which version does python 2.4 uses. ld.so.config contains the following: include ld.so.conf.d/*.conf /usr/local/lib In folder /usr/local/lib I find a symbolic link named libsqlite3.so that points to libsqlite3.so.0.8.6 WHAT IS HAPPENING??????? :S
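
A minimal sketch of how to check which sqlite binding the yum interpreter actually loads (Python 2.4 syntax, since that is the interpreter yum uses here; the ldd step is optional and assumes the binding is a compiled extension):

    # save as check_sqlite.py and run with the interpreter yum uses:
    #   /usr/bin/python2.4 check_sqlite.py
    import sqlite                  # the binding imported by yum's sqlitecache.py
    print sqlite.__file__          # which installation is actually being picked up
    print getattr(sqlite, "version", "no version attribute")
    # then run ldd by hand on the compiled extension in that directory to see
    # whether it resolves to the libsqlite3.so symlinked in /usr/local/lib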

    Read the article

  • Managing access to multiple Linux systems

    - by Swartz
    I searched for answers but have found nothing on here... Long story short: a non-profit organization is in dire need of modernizing its infrastructure. The first step is to find an alternative for managing user accounts on a number of Linux hosts. We have 12 servers (both physical and virtual) and about 50 workstations, with 500 potential users for these systems. The individual who built and maintained the systems over the years has retired. He wrote his own scripts to manage it all, and it still works - no complaints there. However, a lot of it is very manual and error-prone, the code is messy and often needs to be tweaked after updates, and worst of all there is little to no documentation - just a few ReadMes and random notes that may or may not still be relevant. So maintenance has become a difficult task. Currently accounts are managed via /etc/passwd on each system; as accounts are added on the "main" server, updates are distributed to the correct systems by cron scripts. Some users need access to all systems (like a sysadmin account), others need access to shared servers, while others only need workstations or a subset of them. Is there a tool that can help us manage accounts and meets the following requirements?

- preferably open source (i.e. free, as the budget is VERY limited) and mainstream (i.e. actively maintained)
- preferably has LDAP integration, or could be made to interface with an LDAP or AD service for user authentication (this will be needed in the near future to integrate accounts with other offices)
- user management (adding, expiring, removing, lockout, etc.)
- lets us manage which systems (or groups of systems) each user has access to - not all users are allowed on all systems
- supports user accounts whose home directories and mounts differ depending on which system they are logged into. For example, sysadmin logged into the "main" server would have main://home/sysadmin/ as the home directory and all shared mounts; sysadmin logged into a staff workstation would have nas://user/s/sysadmin as the home directory (different from above) and potentially a limited set of mounts; a logged-in client would have his/her home directory in a different location again and no shared mounts (a small sketch of what we mean follows at the end of this post).

If there is an easy management interface, that would be awesome. And if the tool is cross-platform (Linux / MacOS / *nix), that would be a miracle! I have searched the web and found nothing suitable. We are open to any suggestions. Thank you.

EDIT: This question has been incorrectly marked as a duplicate. The linked-to answer only talks about having the same home directory on all systems, whereas we need different home directories depending on which system the user is currently logged into (MULTIPLE home directories). Also, access needs to be granted only to some machines, not the whole lot. Mods, please understand the full extent of the problem instead of merely marking it as a duplicate for points...
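
To illustrate the home-directory requirement, here is a minimal sketch (Python, purely illustrative; the host classes, path templates and user name are hypothetical) of the kind of per-host mapping the tool would need to express - the same account resolving to different home directories depending on where the login happens:

    # Hypothetical mapping of host classes to home directory templates
    HOME_TEMPLATES = {
        "main":        "/home/{user}",                 # e.g. on the "main" server
        "workstation": "/nas/user/{initial}/{user}",   # e.g. on staff workstations
    }

    def homedir_for(user, host_class):
        template = HOME_TEMPLATES[host_class]
        return template.format(user=user, initial=user[0])

    print(homedir_for("sysadmin", "main"))         # /home/sysadmin
    print(homedir_for("sysadmin", "workstation"))  # /nas/user/s/sysadmin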

    Read the article

  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: It's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have... One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database "MyDatabase" because primary filegroup is full" when trying to save an entry in their software. I searched around for hours, looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat. It's hard to say, because it seems like with the nature of the error, if you try over and over you might get it to actually save. My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely / possible solution, but I can't access the database! If I run the SQL Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried runing Management Studio as Administrator, in case that would help. No difference, though. Now, what I'm guessing is going on -- from my limited knowledge of SQL and from reading online -- is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including SA, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far: 1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow. Especially since I verified there is plenty of free space and the HD has been defragmented. 2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Management Stuido. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx It sounds like it's a step-by-step guide for giving me the access I need to SQL. I'm guessing that if I followed this guide, I would be able to then log in to the SQL server via Management Studio with the proper permissions, and would be able to enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem! So I guess I have a few questions: 1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree? 2 - Is there any risk at all in hurting / disabling / wrecking the current SQL database or setup with me going through the guide to regain SQL access? I understand that per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how. Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. 
But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
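
If the autogrow diagnosis is right, the fix comes down to inspecting and changing the file growth settings once sysadmin-level SQL access is regained. A minimal sketch of what that looks like (Python with pyodbc purely for illustration; the server name, logical file name and growth increment are assumptions, and the same two statements can equally be run from a Management Studio query window):

    import pyodbc

    # Trusted connection from an account with sysadmin rights (assumption);
    # autocommit is needed because ALTER DATABASE cannot run inside a transaction
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=SBSSERVER;DATABASE=MyDatabase;Trusted_Connection=yes",
        autocommit=True)
    cur = conn.cursor()

    # Current size and growth settings of every file in the database
    for name, size, growth, max_size in cur.execute(
            "SELECT name, size, growth, max_size FROM sys.database_files"):
        print(name, size, growth, max_size)

    # Turn autogrow on for the primary data file; "MyDatabase" as the logical
    # file name is an assumption - use the name column returned above
    cur.execute("ALTER DATABASE MyDatabase "
                "MODIFY FILE (NAME = MyDatabase, FILEGROWTH = 256MB)")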

    Read the article

  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also setup a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall, no dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using neither TCP1723 nor GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly. We have the issues no matter if there are active incoming connections or not. I'm not sure what my next debugging step would be; any suggestions? EDIT: The event log on the server has the following warning from RasMan: A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets. Obviously this points to GRE being a potential problem. But seeing as I have other clients connectiong without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I'm suspecting this might be a red herring. EDIT: Here are the anonymized events logged by the client in chronological order: CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are: Dial-in User = XXX\YYY VpnStrategy = PPTP DataEncryption = Require PrerequisiteEntry = AutoLogon = No UseRasCredentials = Yes Authentication Type = CHAP/MS-CHAPv2 Ipv4DefaultGateway = No Ipv4AddressAssignment = By Server Ipv4DNSServerAssignment = By Server Ipv6DefaultGateway = Yes Ipv6AddressAssignment = By Server Ipv6DNSServerAssignment = By Server IpDnsFlags = Register primary domain suffix IpNBTEnabled = Yes UseFlags = Private Connection ConnectOnWinlogon = No. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY. 
CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806. Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request", while the server-side capture shows the incoming client requests arriving but apparently going unanswered (screenshots were attached to the original post). Given that this is GRE traffic, I think that rules out GRE being blocked. The question is, why doesn't the server reply? Comparing the Configuration Request the server receives from the non-functioning client (the one that gets no response) with the one it receives from the working client, they seem identical to me, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.
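
One way to exercise GRE (IP protocol 47) in both directions independently of PPTP is to generate and capture protocol-47 packets directly. A minimal sketch using scapy (assuming scapy is installed and the script runs with administrator/root rights; the address is a placeholder, and this is a rough plumbing check, not a substitute for the RRAS logs):

    from scapy.all import IP, GRE, send, sniff

    VPN_SERVER = "203.0.113.10"   # placeholder for the PPTP server's public address

    # Fire a bare GRE packet (IP protocol 47) at the VPN server from the failing client...
    send(IP(dst=VPN_SERVER, proto=47) / GRE())

    # ...and on the server side, watch whether protocol-47 traffic from this client
    # arrives at all, and whether anything is sent back the other way
    packets = sniff(filter="ip proto 47", timeout=30)
    packets.summary()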

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on a upgrade to Exchange 2007 and I wanted to get some advise on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD Sites, that is housed on a single server. We need to move to a redundant, fault tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system since we have a Barracuda box already that will handle in/out spam/virus filtering. Backend I'm planning on 2 mailbox servers which will be Dell 2950s with 16GB RAM, 2 either dual-core or quad-core CPUs and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows: Is 16GB enough RAM for serving that many mailboxes along with the windows clustering and ccr? I'm trying to figure out disk layouts and I'm unsure of whether to use all local disk or some local and some SAN, via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 - 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM. Option 1: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 1 - Mail stores Option 2: 2 drives, RAID 1 - OS and logs 4 drives, RAID 5 - Mail Stores and scratch space for eseutil. Option 3: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 0 - scratch space ~300GB iSCSI volume for mail stores Option 4: 2 drives, RAID 1 - OS 4 drives, RAID 5 - scratch space ~300GB iSCSI volume for mail stores ~300GB iSCSI volume for logs I have 2 sockets for CPUs and need to chose between dual and quad cores. The dual core have faster clocks but less cache and I'm thinking older architecture. Am I better off with more cores and cache while sacraficing clock speed? I am planning on adding the new E2K7 cluster to the E2K3 server and then move each mailbox over, all at once, then remove the old server. This seems more complicated than simply getting rid of the 2003 server and then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my time, where a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no and the migration is my only real option if I want to prebuild. I need to also migrate about 30GB of Public Folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PF's setup? I guess I could even keep the E2K3 server to host just the PFs? Lastly, if I have a mix of Outlook 200, 2003 and 2007 what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover, we'll be at like 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using is for all Outlook clients, does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully messhed, stable WAN. Thank you all very much for the assistance in advance and I look forward to discussion of these points! Regards...Michael
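
On the first question, a rough back-of-the-envelope check against the commonly cited Exchange 2007 mailbox-role guidance of about 2 GB base plus 2-5 MB per mailbox (treat these figures as assumptions to verify against Microsoft's sizing documentation):

    mailboxes = 400
    base_gb = 2.0
    per_mailbox_mb = 5          # the heavy-profile end of the assumed range
    required_gb = base_gb + mailboxes * per_mailbox_mb / 1024.0
    print("Approximate mailbox-role memory requirement: %.1f GB" % required_gb)
    # roughly 4 GB, so 16 GB leaves plenty of headroom for clustering/CCR overhead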

    Read the article

  • Netgear VPN endpoint drops connectivity to single IP address

    - by Justin Bowers
    I'm having a strange issue with one of the networks I manage recently. We have about 14 different networks connected together through a Netgear hardware VPN. Everything has been running fine (other than standard connectivity problems) for a few years now, but I've hit a wall with a problem that's just cropped up at one of the VPN endpoint locations. Our primary VPN network is on the 192.168.1.0/24 subnet and our other 13 networks are on the 192.168.2.0/24 - 192.168.14.0/24 subnets. We run a terminal server on the 192.168.1.0/24 network with IP address 192.168.1.100. Starting Thursday of last week, we had a problem with connectivity of the 192.168.2.0/24 network to 192.168.1.100. When troubleshooting the problem, I found that Network 2 (192.168.2.0/24) still had connectivity to the Internet as well as VPN connectivity to Network 1 (192.168.1.0/24). We could ping and connect to any other device other than the server with IP address 192.168.1.100. Also, none of our networks had an issue accessing 192.168.1.100. I ran a scan on Network 2 after assigning static IP addresses to one of the workstations but received no response from 192.168.1.100 (looking for possibly a new device that someone had plugged into Network 2 that had a duplicate IP address with the server). Asking the staff, noone had reported connecting a new device to Network 2 as well. I then assigned a secondary IP address of 192.168.1.88 to the server and could ping and connect to the secondary IP address from Network 2, but still couldn't access it via 192.168.1.100. I then just rebooted the Netgear VPN Firewall (FVS318v3) and after it came back up, connectivity to 192.168.1.100 was restored. Beforehand, when checking for devices with a possible duplicate IP address, I did run a check for available wireless access points and stations and found none (our wireless is secured via MAC address access control through a WG102 device). I thought that it may have been a fluke for some reason since everything came back up after a power cycle of the VPN Firewall. Things ran fine for a few days until this afternoon, when the problem happened again. One of our users claimed that they had connectivity problems to the server and after connecting to the computer, I found that I couldn't ping the server address anymore. I could still ping the alternate IP address of the server though, so I went ahead and rebooted the VPN firewall again and connectivity was restored. Unfortunately, I can't find anything in the security or VPN logs of the firewall that helps point me in the right direction, so I thought I would go ahead and ask to see if anyone else has any other insight into why we've started having this problem. I am aware that it could still be a device with a duplicate IP address of the server on Network 2, but every employee claim states that there's been no such new device brought in to the network. I know this is a long read, but any help is appreciated! Thanks, Justin
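
One way to test the duplicate-address theory the next time the problem appears, without waiting for a firewall reboot, is to check which MAC address answers for 192.168.1.100. A minimal sketch (Python calling the Windows ping and arp tools; it must be run from a machine that is on the same local segment as the suspected duplicate, e.g. a Network 2 workstation temporarily given a 192.168.1.x address as in the scan described above, since ARP only sees the local broadcast domain):

    import subprocess

    TARGET = "192.168.1.100"   # the terminal server that becomes unreachable

    # One ping (even a failed one) is usually enough to populate the ARP cache
    subprocess.call(["ping", "-n", "1", TARGET])

    # Print the ARP entry for the target; a MAC that does not match the server's
    # real NIC would point at another device answering for the server's address
    output = subprocess.check_output(["arp", "-a"], universal_newlines=True)
    for line in output.splitlines():
        if TARGET in line:
            print(line)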

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: None of the Windows equipment we have runs does anything that talks to the Internet nor do we have any Windows admins on staff.) Key points: We have a main office and about 12 remote sites that essentially double NAT their subnets with physically-segregated switches. (No VLANing and limited ability to do so with current switches) These locations have a "DMZ" subnet that are NAT'd on an identically assigned 10.0.0/24 subnet at each site. These subnets cannot talk to DMZs at any other location because we don't route them anywhere except between server and adjacent "firewall". Some of these locations have multiple ISP connections (T1, Cable, and/or DSLs) that we manually route using IP Tools in Linux. These firewalls all run on the (10.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems. Connecting these firewalls (via simple unmanaged switches) is one or more servers that must be publically-accessible. Connected to the main office's 10.0.0/24 subnet are servers for email, tele-commuter VPN, remote office VPN server, primary router to the internal 192.168/24 subnets. These have to be access from specific ISP connections based on traffic type and connection source. All our routing is done manually or with OpenVPN route statements Inter-office traffic goes through the OpenVPN service in the main 'Router' server which has it's own NAT'ing involved. Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers several 5-20 terminals. The 192.168.2/24 and 192.168.3/24 subnets are mostly but NOT entirely on Cisco 2960 switches that can do VLAN. The remainder are DLink DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs since only the senior networking staff person understands how it works. All regular internet traffic goes through the CentOS 5 router server which in turns NATs the 192.168/24 subnets to the 10.0.0.0/24 subnets according to the manually-configured routing rules that we use to point outbound traffic to the proper internet connection based on '-host' routing statements. I want to simplify this and ready All Of The Things for ESXi virtualization, including these public-facing services. Is there a no- or low-cost solution that would get rid of the Double-NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down? Basic Diagram for the main office: These are my goals: Public-facing Servers with interfaces on that middle 10.0.0/24 network to be moved in to 192.168.2/24 subnet on ESXi servers. Get rid of the double NAT and get our entire network on one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.

    Read the article

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on Microsoft Windows Server 2003 (Build 3790) domain controller/schema master via the MMC DNS snap in or locally via the DNS MMC from Administrative tools I'm getting a red "X" through the icon for the DNS Server. The inability to access DNS management via MMC happens on all domain controllers as well. We've looked at items such as the DHCP client not being started, incorrect DNS setup ( the machine points at itself and another DC ), the DNS service not running ( it is and all DNS queries via NSLOOKUP work correctly ), dslint returns the correct information and functions as expected. There is the following entry in the DNS event log: The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 0000051b dnscmd fails with RPC server unavailable yet RPC is started: C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info Info query failed status = 1722 (0x000006ba) Command failed: RPC_S_SERVER_UNAVAILABLE 1722 (000006ba) DCDIAG /TEST:DNS /V /E produces the following errors: Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1753 (Type: Win32 - Description: There are no more endpoints available from the endpoint mapper.)] Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)] The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and ADS related activities are working except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated. EDIT: Adding Registry export per request: Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc Class Name: <NO CLASS> Last Write Time: 10/18/2012 - 2:29 PM Value 0 Name: DCOM Protocols Type: REG_MULTI_SZ Data: ncacn_ip_tcp Value 1 Name: UuidSequenceNumber Type: REG_DWORD Data: 0xb19bd0f Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: ncacn_np Type: REG_SZ Data: rpcrt4.dll Value 1 Name: ncacn_ip_tcp Type: REG_SZ Data: rpcrt4.dll Value 2 Name: ncadg_ip_udp Type: REG_SZ Data: rpcrt4.dll Value 3 Name: ncacn_http Type: REG_SZ Data: rpcrt4.dll Value 4 Name: ncacn_at_dsp Type: REG_SZ Data: rpcrt4.dll Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: DefaultSyntax Type: REG_SZ Data: 3 Value 1 Name: Endpoint Type: REG_SZ Data: \pipe\locator Value 2 Name: NetworkAddress Type: REG_SZ Data: \\. Value 3 Name: Protocol Type: REG_SZ Data: ncacn_np Value 4 Name: ServerNetworkAddress Type: REG_SZ Data: \\. 
Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: Enabled Type: REG_DWORD Data: 0x1 Value 1 Name: ValidPorts Type: REG_SZ Data: pdc:100-5000 Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: 9 Type: REG_SZ Data: secur32.dll Value 1 Name: 10 Type: REG_SZ Data: secur32.dll Value 2 Name: 14 Type: REG_SZ Data: schannel.dll Value 3 Name: 16 Type: REG_SZ Data: secur32.dll Value 4 Name: 1 Type: REG_SZ Data: secur32.dll Value 5 Name: 18 Type: REG_SZ Data: secur32.dll Value 6 Name: 68 Type: REG_SZ Data: netlogon.dll
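
Given the 1722/1753 errors above, the first thing both the DNS MMC snap-in and dnscmd depend on is reaching the RPC endpoint mapper on the server (TCP 135), which then hands out the dynamic port for the DNS Server RPC interface. A minimal sketch of the basic reachability part (Python; the host name is a placeholder, and note that error 1753 usually means the mapper is reachable but has no registration for the interface, so this only rules out the simplest failure):

    import socket

    DC = "dc1.example.local"   # placeholder for the domain controller running DNS

    # The endpoint mapper listens on TCP 135; if this connection fails, no
    # RPC-based MMC snap-in (DNS included) will be able to bind to the server
    s = socket.create_connection((DC, 135), timeout=5)
    print("endpoint mapper reachable:", s.getpeername())
    s.close()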

    Read the article

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has a AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay? Disable IPv6 on client? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Too many clients to ensure this is set everywhere consistently. Will cause more problems later when we finally implement IPv6. Disable IPv6 on the server? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Requires an inconvenient registry hack to disable the entire IPv6 stack. Ensuring this is correctly set on all servers is inconvenient. Will cause more problems later when we finally implement IPv6. Mask IPv6 records on the user-facnig DNS recursor? Nope, we're using NLNet Unbound and it doesn't support that. Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible. At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way. UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable. C:\Windows\system32>ping myhost.mydomain Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data: General failure. General failure. General failure. General failure. Ping statistics for 2002:1234:1234::1234:1234: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it does a fallback to IPv4. UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC1918 addresses. UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly. netsh int ipv6 6to4 set state disabled But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
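
The stall described above is address selection rather than name resolution: both records come back, but the client walks the AAAA answer first. The behaviour is easy to reproduce outside of mstsc. A minimal sketch (Python; the host name is a placeholder) showing the full getaddrinfo result versus an IPv4-only lookup, which is effectively what ping -4 does:

    import socket

    HOST = "myhost.mydomain"   # placeholder for a server with both A and AAAA records

    # What a dual-stack client sees: IPv6 results are typically sorted first, so
    # applications that try addresses in order stall on the unroutable AAAA answer
    for family, _, _, _, sockaddr in socket.getaddrinfo(HOST, 3389, socket.AF_UNSPEC, socket.SOCK_STREAM):
        print(family, sockaddr)

    # Forcing the v4 family skips the AAAA answer entirely (the "ping -4" behaviour)
    for family, _, _, _, sockaddr in socket.getaddrinfo(HOST, 3389, socket.AF_INET, socket.SOCK_STREAM):
        print(family, sockaddr)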

    Read the article

< Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >