Search Results

Search found 20796 results on 832 pages for 'active window'.


  • Fixing SharePoint 2010 Permission Problems on Windows 7

    - by Ricardo Peres
    I had a tough time trying to get SharePoint working perfectly on a Windows 7 development machine that was occasionally disconnected from the Active Directory (when I am home I must connect through a VPN). I mostly had problems with service applications such as User Profile, Managed Metadata, Business Connectivity Services and the like, and all I got were cryptic messages such as “access denied” or “the service or application pool is not started”. I was sure that both the services and application pools were running under a domain account that had proper permissions on the SQL Server instance, and basically it was a fresh installation. Lots of people are having the same problem, apparently. After banging my head against the wall for several days, I remembered about farm (what I had) versus stand-alone (which I had never tried) installations. Bingo! Here’s what I did: I dropped all SharePoint databases and logins and reinstalled SP from scratch, only this time not in farm mode but as stand-alone. After the SharePoint Configuration Wizard started, I cancelled it and started the Management Shell. I created the configuration database manually by using the New-SPConfigurationDatabase cmdlet, where I specified a local account – something the Configuration Wizard wouldn’t allow me to do. Then I restarted the Configuration Wizard and everything began working perfectly! Yes, I got some pre-configured service applications and also some content which I didn’t need, but I realized it was possible to drop and recreate everything the way I wanted. All services and application pools are now running under local accounts, which is fine for my development needs. Really, Microsoft… I hope this brings light to someone facing the same problems!
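    A rough sketch of that manual step, run from the SharePoint 2010 Management Shell; the database, server and account names below are illustrative placeholders, not the author's actual values:

        # All names are placeholders; note the local (non-domain) account.
        $cred = Get-Credential "MYDEVBOX\sp_admin"
        New-SPConfigurationDatabase -DatabaseName "SharePoint_Config" `
            -DatabaseServer "MYDEVBOX\SQLEXPRESS" `
            -FarmCredentials $cred `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force)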

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (inherent to scrum itself) but rather related to a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass. Should the PO interfere in UI design when there are designers at work in the scrum team? (An example that has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If so, to what extent? Colors? Layout? Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.? Should the PO interfere in validation mechanisms? For example, saying this field should be required, or that we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI. How granular could/should the PO get into the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation? I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

  • Delivering Oracle DBaaS: Journey to Enterprise Cloud with Oracle Database 12c

    - by B R Clouse
    The release of Oracle Database 12c is accompanied by extensive supporting collateral that details the new features and options of this major release. But with so much to read and investigate, where to start? If you don't have the time to pore through everything, then you may wish to organize your reading in terms of the use case you're most interested in. If your interest is Database as a Service in private database clouds, then may I suggest that you start here: Accelerate the Journey to Enterprise Cloud with Oracle Database 12c. This paper describes the phases of the journey to enterprise cloud, and enumerates the new features and options in Oracle Database 12c that support each phase. Oracle Multitenant figures prominently, but it's not the only cloud-enabling topic: Oracle Database Quality of Service Management, Application Continuity, Automatic Data Optimization, Global Data Services and Active Data Guard Far Sync all deliver key benefits for delivering database as a service. Further reading and research is suggested by the references included in the paper. Happy clouding!

    Read the article

  • Scripts Causing Flash Intro Animation To Stop [migrated]

    - by ubique
    When my Flash website loads, it freezes halfway through the initial animation for 2-3 seconds and then continues. This obviously doesn't look great and I can't figure out what is causing it. I think it is one of the scripts in index.html causing the issue and have tried all sorts of ways to correct it - what have I done wrong?

        <!DOCTYPE html>
        <html lang="en">
        <head>
        <title>company name</title>
        . . .
        <link href="style.css" rel="stylesheet" type="text/css" />
        <script type="text/javascript" src="js/flashobject.js"></script>
        <!--[if lt IE 7]>
        <link href="ie6.css" rel="stylesheet" type="text/css" />
        <![endif]-->
        </head>
        <body>
        <header>
          <hgroup>
            <h1>company</h1>
            <h2>company</h2>
          </hgroup>
        </header>
        <div id="container">
          <div id="head">
            <div class="aligncenter"><a href="http://www.adobe.com/go/EN_US-H-GET-FLASH">
              <img src="http://www.adobe.com/images/shared/download_buttons/get_adobe_flash_player.png" alt="" /></a>
            </div>
          </div>
        </div>
        <div class="g-plus" data-href="https://plus.google.com/100925740920754223119?rel=publisher" data-width="170" data-height="69" data-theme="light">
        </body>

        <!-- Flash -->
        <script type="text/javascript">
          var fo = new FlashObject("main_v10.swf", "head", "100%", "100%", "8", "");
          fo.addParam("quality", "high");
          fo.addParam("allowFullScreen", "true");
          fo.write("head");
        </script>

        <!-- Hello Bar -->
        <script type="text/javascript" src="//www.hellobar.com/hellobar.js"></script>
        <script type="text/javascript">
          new HelloBar(39040,52484);
        </script>

        <!-- GPlus -->
        <script type="text/javascript">
          window.___gcfg = {lang: 'en'};
          (function() {
            var po = document.createElement("script");
            po.type = "text/javascript"; po.async = true;
            po.src = "https://apis.google.com/js/plusone.js";
            var s = document.getElementsByTagName("script")[0];
            s.parentNode.insertBefore(po, s);
          })();
        </script>

        <!-- Google -->
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-1']);
          _gaq.push(['_setSiteSpeedSampleRate', 10]);
          _gaq.push(['_trackPageview']);
          (function init() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga,s);
          })();
          window.onload = init;
        </script>
        </html>

    Read the article

  • Configuration tools for multiple monitors for X / Linux

    - by richard
    I have Ubuntu 10.04 running GNOME and two monitors. I am wondering if I can get a better multi-monitor configuration tool. The one I have, gnome-display-properties, has too many problems, including: when I swapped my monitors over, the narrower (external) one ended up on the left; there is a width calculation error, such that I have a virtual monitor the width of the wide monitor spanning the narrow monitor and part of the wide monitor, and a virtual narrow monitor on the remainder of the wide monitor; and the visible mouse pointer is not aligned with its active spot, with an x offset of one monitor width. I would like, in approximate order of importance:
    - no bugs;
    - to be able to select which is the primary monitor;
    - to have multiple configurations;
    - configurations to be automatically selected based on which monitors are attached;
    - configurations to be cycled (reliably) when the display-mode key is pressed;
    - when a display is deactivated, for windows to migrate to the remaining monitors;
    - an option to not change display resolution when mirroring, but to use side/top blanking bars to pad out the screen.
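    Not part of the question, but for reference: on 10.04 some of this can be scripted with xrandr, the tool the GNOME dialog wraps. The output names below are illustrative (list yours with xrandr -q), and --primary requires xrandr 1.3 or later:

        # Laptop panel on the left as primary, external monitor on the right.
        xrandr --output LVDS1 --auto --primary \
               --output VGA1  --auto --right-of LVDS1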

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command or way that I could, from a program (C++, Python, or other), enter a series of keyboard inputs that are interpreted as though they were typed at the keyboard? I have a bad case of RSI from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim" - basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into a limitation in that I can only execute a command-line command for a given voice command. Rather than a "voice command" - "command line command" mapping, I would like a "voice command" - "keyboard input" mapping. So when my active window is a browser and I send a shortcut such as ctrl + n, a new tab opens; if I'm in vim, a new vim tab opens. Any suggestions, ideas, tools or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.
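    One common approach (an assumption on my part, not something named in the question) is to synthesize X11 key events with the xdotool utility, which a Python voice-command handler can shell out to:

        # Sketch: map a recognized voice command to synthetic keystrokes.
        # Assumes xdotool is installed (e.g. sudo apt-get install xdotool); X11 only.
        import subprocess

        def send_keys(combo):
            # e.g. send_keys("ctrl+t") presses Ctrl+T in the currently active window
            subprocess.call(["xdotool", "key", combo])

        def type_text(text):
            # types literal text as though it were entered at the keyboard
            subprocess.call(["xdotool", "type", text])

        send_keys("ctrl+t")  # a browser opens a new tab; vim sees the same keystroke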

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way of defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform:
    - Modular pluggability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease.
    - Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC.
    - Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and find errors in.
    - Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it.
    - Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files.
    - Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.
    Bray 2.0 provides a lot of key features for developing valid, robust DFC files:
    - XML validation. A DFC can be validated against the DFC schema specification.
    - DFC Check List. An interactive, minimal guide for creating a complete DFC.
    - Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, with the ability to quickly find and fix errors.
    - Change Log. Bray audits every change to a DFC and places them in a change log for users to peruse.
    - Comments. Users can optionally add comments for other users to see.
    - Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray.
    - Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.
    ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more.

    Read the article

  • How to open the JavaScript console in different browsers?

    - by Šime Vidas
    Updated on October 7th, 2012.
    Chrome: Press CTRL + SHIFT + J to open the "Console" tab of the Developer Tools. Alternative method: press CTRL + SHIFT + I or F12 to open the Developer Tools, then press ESC (or click on "Show console" in the bottom right corner) to slide the console up. Note: Chrome's dev tools have a dedicated "Console" tab, but a smaller "slide-up" console can be opened while any of the other tabs is active.
    Safari: Press CTRL + ALT + I to open the Web Inspector, then see Chrome's step 2 (Chrome and Safari have pretty much identical dev tools). Note: this only works if the "Show Develop menu in menu bar" check box in the Advanced tab of the Preferences menu is checked!
    IE9: Press F12 to open the developer tools, then click the "Console" tab.
    Firefox: Press CTRL + SHIFT + K to open the Web Console; or, if Firebug is installed (recommended), press F12 to open Firebug and click on the "Console" tab.
    Opera: Press CTRL + SHIFT + I to open Dragonfly, then click on the "Console" tab.

    Read the article

  • Mailbox move issue from Exchange 2003 to Exchange 2010

    - by Ryan Roussel
    Today while moving mailboxes between Exchange 2003 and Exchange 2010, I hit an issue with a couple of mailboxes. These mailboxes all popped access denied errors, or more exactly: Insufficient Access Rights to perform the operation. The cause was similar to the mail flow issue in that inheritable permissions were not turned on for the user object in Active Directory. This also presented its own unique problem: since the initial move request failed because of permissions, it had to be cleared before a new move request could be created. On top of that, the request did not show up in the EMC. I used the following process to clear the request, assign permissions, then create a new request:
    1. First you need to know the ExchangeGUID of the mailbox for the Remove-MoveRequest command. To quickly get the GUID for a mailbox, simply run the command from the original screenshot (a likely equivalent is sketched after this excerpt).
    2. Next we need to clear out the move request using PowerShell by running:
        [PS] c:\>Remove-MoveRequest -MoveRequestQueue "mailbox database 1030639620" -MailboxGuid 8525686f-d4d3-42b7-92f1-46d77ea841a3
    3. Then re-establish inheritable permissions. This can be done in AD Users and Computers: switch to View Advanced Features, then under the Security tab of the object, click Advanced and check "allow inheritable permissions of parent to propagate to this object".
    4. Once the inheritable permissions are restored, create a new move request (the EMC can also be used to initiate the move request once the permissions are corrected):
        [PS] c:\>New-MoveRequest -Identity jyoung -BadItemLimit 100 -TargetDatabase "mailbox database 1030639620"
    And that's it. The mailbox should move over smoothly with no access denied error.
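    The step 1 command was a screenshot in the original post and is not preserved in this excerpt; presumably it is something like the standard Exchange 2010 cmdlet below (my assumption, not the author's verbatim command):

        [PS] c:\>Get-Mailbox -Identity jyoung | Format-List ExchangeGUID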

    Read the article

  • Read On Phone Pushes Data from Your Desktop to the Appropriate Android App

    - by ETC
    Read On Phone is a free Android application that intelligently pushes data to your phone from your browser. Rather than simply opening the URL on your phone, it opens the appropriate application for the task and formats text. Most send-to-phone type tools simply take the URL of the web page you're looking at on your computer and shuttle it to your phone. Read On Phone is a more active and effective tool. When you send a page that is text, it formats the text for easy reading on your phone. When you send a YouTube video, map, or telephone number, it opens up the appropriate tool on your phone, such as your YouTube viewer, Google Maps, or your phone dialer. In addition to that handy functionality, Read On Phone also includes adjustments for day and night reading, font size, auto-scrolling, and pagination. Read On Phone is available as both a Chrome extension and as a bookmarklet for cross-browser use. Hit up the link below for additional information.

    Read the article

  • Thank you South Florida for a successful SPSouthFLA

    - by Leonard Mwangi
    I wanted to officially thank the organizers, speakers, volunteers and attendees of SharePoint Saturday South Florida. For the first event in South Florida, the reception was phenomenal, with a lineup ranging from the keynote by Joel Oleson to sessions by well-renowned speakers like John Holliday, Randy Drisgill, Richard Harbridge, Ameet Phadnis, Fabian Williams, Chris McNulty and Jaime Velez, to organizers like Michael Hinckley, among others. With my Business Intelligence (BI) presentation being on the last track of the day, I spent quality time networking with these great guys and getting the insider scoop on the international SharePoint community from Joel and his son, which was mesmerizing. I had a very active audience, to the point where we couldn't accommodate all the content within the allocated hour: they were very engaged and wanted a deep-dive session on new features like PowerPivot and enhancements to PerformancePoint and Excel Services, among others, in order to understand the business value and how SharePoint 2010 is making self-service BI a reality. These community events allow attendees to experience technology first hand and network with MVPs, authors and experts, providing high-quality educational sessions, usually for free, which is reason enough to attend. I have made the slides for my session available for download for those interested: http://goo.gl/VaH5x

    Read the article

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus the common misconception that online learning will “replace” traditional education). Our research shows that Coursera’s platform, when used concurrently with a traditional classroom setup, is ideal for “blended learning” (i.e., students watch lectures pre-class, then class time focuses on interactive work and discussion). Additionally, we agree with Brad Zomick of SkilledUp, an online learning aggregator, who acknowledges an online course “isn’t an alternative at all but rather a different path with its own rewards.” The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we’re free, open, and flexible to learners’ unique needs and styles. Over the past two years, however, the evidence has shown there are many more tangible benefits to open, online learning. In SkilledUp’s “The Advantages of Online Courses [Infographic]”, crafted from the findings of leading educational research, four observations stand out beyond the overt characteristics:
    - Speedier learning: “Research shows that online students achieve same or better learning results in about half the time as those in traditional courses.”
    - More active, engaged and motivated: learners thrive “when working with coursework that is challenging but within their capacity to master.”
    - Tangible skill building: with an “improved attitude toward learning.”
    - Better teaching quality: courses are taught by experts, with various multimedia and cutting-edge technology, and “are usually better organized than traditional courses.”
    This is only the beginning, Courserians! Every day we hear your incredible stories about how open online courses enrich your lives and enhance your careers, while we study the steady stream of scientific, big-data research proving their worth on a large scale (such as UPenn’s latest research on the welcomed diversity in Coursera-hosted Wharton MBA courses). Our motto, “Learning without Limits”, reminds us that open, online courses give tremendous opportunity to those who might not otherwise have access (or time, or money) to study at a high-caliber institution. Source: Coursera

    Read the article

  • Star Trek inspired home automation visualisation

    - by Zak McKracken
    I’ve always been a more or less active fan of Star Trek. During the construction phase of my house I started coding a GUI for controlling the house, which has an EIB (European Installation Bus). Just for fun I designed a version inspired by the LCARS design used in Star Trek TNG and showed this to my wife. I had shown her several designs before, but this was the only one she really liked, so I decided to go on with it. I started a C# WinForms application. The software runs on a wall-mounted Shuttle barebone PC. The first plan was an industrial panel PC, but its processor was too slow; the Atom now in use is OK. I started with the LCARS controls found on CodeProject. Since the classic LCARS design divides the screen into two parts, this tended to be impracticable, so I used my own design. For now the software is able to:
    - Switch lights/wall outlets
    - Show current temperatures for all room controllers
    - Show outside temperature with a 24h trend chart
    - Show the status of the two heat pumps
    - Provide an alarm clock (e.g. for cooking)
    - Play internet radio streams
    - Control absence
    - Mute the door bell
    - Speak status messages via speech synthesis
    Currently I’m working on an integration of my electric meter. The main heat pump and the electric meter are connected to my LAN. I also tried some speech recognition, but I have problems with the microphone: it works when you are right in front of the PC, but not far away, say on the other side of the room. The main view's table displays raw values which are sent over the EIB – completely useless, but looks great. For each floor I have a different view, where you can see the temperatures and check the status of the lights (the buttons blink when a light is switched on), and there is a separate view for the heat pump. The next step would be to integrate control of my Squeezebox server (I use different Squeezeboxes throughout the house as a multiroom audio solution).
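    As an illustration of the last item in that list, here is a minimal sketch of speaking a status message from a C# WinForms app, assuming the standard System.Speech API rather than the author's actual code:

        // Requires a reference to the System.Speech assembly.
        using System.Speech.Synthesis;

        class StatusAnnouncer
        {
            private readonly SpeechSynthesizer synth = new SpeechSynthesizer();

            public void Announce(string message)
            {
                // SpeakAsync keeps the WinForms UI thread responsive.
                synth.SpeakAsync(message);
            }
        }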

    Read the article

  • Reflecting on 2010 and Looking into 2011

    - by Sam Abraham
    In early 2010, I blogged and shared my excitement as I was about to embark on a new journey, relocating to South Florida. As I settled down and adjusted to my new life, I was presented with an opportunity to get actively involved and volunteer in the local Florida .Net and project management communities. I have since devoted a significant portion of my time to community initiatives: coordinating the West Palm Beach .Net User Group, volunteering as a member of the INETA Speakers Bureau, and traveling to attend and speak at .Net code camps and user groups throughout the states of Florida and New York. I have also taken on various volunteer roles at the South Florida Chapter of the Project Management Institute, starting as a core team member on the chapter's mentoring initiative and ending the year as Project Manager of the chapter's mentoring program and as Director of Electronic Communications on the chapter's IT team. I am also serving a one-year term (2010-2011) as secretary and founding board member of Florida's first official chapter of the International Association for Software Architects (IASA). A big thank you is due to those who afforded me the opportunity and privilege to take part in these initiatives, and to those who provided guidance and encouragement when I needed them most. Looking ahead into 2011, I hope to continue my community involvement and volunteer activities. I will start by dedicating the first five weekends of the new year to teaching a free, comprehensive Microsoft PowerPoint class at church. My goal will be to start from scratch and slowly cover the various PowerPoint features that can be leveraged to create captivating presentations. Starting in February, I will be resuming my user group/code camp speaking engagements at our South Florida .Net Code Camp and the West Palm Beach .Net User Group. I look forward to continuing to meet, chat and share with our technical community members, and to another active year in community service. All the best, --Sam Abraham

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks; at this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way to focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide. If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory re-factoring. It's much easier if I can just revert all outstanding changes. (See the sketch of this rhythm after this excerpt.)

    Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflict, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work; it should be spaced out over the course of several tasks, not all at the end in a 5-minute window.
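    A hedged sketch of that rhythm in command-line Mercurial (the author uses TortoiseHg; the commands, extension availability and messages below are illustrative):

        # several small local commits per day, one per task
        hg commit -m "task: validate order totals"

        # shelve exploratory work that is getting out of hand (shelve extension)
        hg shelve --name risky-refactor

        # sync with the central repository at least daily
        hg pull --rebase   # rebase extension
        hg push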

    Read the article

  • Show Notes: Debra Lilley on Fusion Applications

    - by Bob Rhubart
    The latest ArchBeat program features a three-part interview with Oracle ACE Director Debra Lilley (ACE Profile). Debra is Oracle Alliance Director at Fujitsu, Executive Member at the International Oracle Users Group Community (IOUG), Director and Deputy Chair at the UK Oracle Users Group (UKOUG), and a partner at Oracle UK. So yeah, she’s connected. In this interview Debra talks about her connection to Oracle Fusion Applications. Listen to Part 1: Debra talks about her role as the Director and Deputy Chairperson of the UKOUG and about the UKOUG development group’s involvement in Oracle Fusion Applications. Listen to Part 2 (March 9): Debra shares her insight into what Fusion Applications will bring to enterprise architecture, and the importance of user experience in enterprise architecture. Listen to Part 3 (March 16): Debra discusses the need to close the gap between IT and business, and how business users should be able to use applications without having to think about the underlying technology. Debra is very active in social networks, so if you have questions or comments you can connect with her via the following: Blog: http://www.debrasoracle.blogspot.com/ Twitter: @debralilley LinkedIn: http://uk.linkedin.com/pub/debra-lilley/1/438/bba And if you’d like to learn more about Oracle Fusion Applications: http://www.oracle.com/us/products/applications/fusion/index.html Coming Soon: Dr. Frank Munz, author of Middleware and Cloud Computing: Oracle Fusion Middleware on Amazon Web Services and Rackspace Cloud; Andy MacMillan (VP, Enterprise 2.0, Oracle) on the socialization of the enterprise; and a panel discussion on “Who gets to be a software architect?” Stay tuned.

    Read the article

  • Implications of automatically "open" third-party domain aliasing to one of my subdomains

    - by Giovanni
    I have a domain, let's call it www.mydomain.com, where I have a portal with an active community of users. In this portal users cooperate, wiki-style, to build some "kind of software". These software applications can then be run by accessing "public.mydomain.com/softwarename". I then want to let my users run these applications from their own subdomains. I know I can do that by automatically modifying the .htaccess file; this is not a problem. I want to let these users create DNS aliases pointing to one specific subdomain. So if a user "pippo" who owns "www.pippo.com" wants to run the software HelloWorld from his own subdomain, he has to: 1. Register on my site. 2. Create his own subdomain on his own site, run.pippo.com. 3. From his DNS control panel, create a CNAME record "run.pippo.com" pointing to "public.mydomain.com" (sketched below). 4. Type http://run.pippo.com/HelloWorld in a browser. When the software (which physically runs on my server) is called, it first checks that the originating domain is a trusted one. I don't do any other kind of check that restricts software execution. From an SEO perspective, I care about Google indexing of www.mydomain.com but I don't care about indexing of public.mydomain.com. What are the possible security implications of doing this for my site? Is there a better way to do this, or software that already does this that I can use?
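    For concreteness, the CNAME from step 3 would look like this in BIND zone-file syntax, using the hypothetical names from the question:

        ; in the zone file for pippo.com
        run.pippo.com.    IN    CNAME    public.mydomain.com.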

    Read the article

  • Spotlight on an ACE: Edwin Biemond

    - by jeckels
    Edwin Biemond is an active member of the ACE community, having worked with Oracle's development tooling and database technologies since 1997. Since then, Edwin has become an expert in many of Oracle's middleware technologies as well, including WebLogic and SOA. In fact, Edwin has become so prolific that he was named Java Developer of the Year in 2009. Edwin hails from the Netherlands, where he is an architect at the company Amis, and is also a co-author of the OSB Development Cookbook. He's a proven expert in ADF, JSF, messaging (Edifact / ebXML), Enterprise Service Bus, web services and tuning of application servers and databases. Recently, Edwin posted a blog on the road map of WebLogic 12c, going over salient features and what the future looks like for Fusion Middleware and the Application Server areas - it's well worth a read, so give it a look. A snippet: WebLogic 12.1.3 will be the first version for many FMW 12c products like Oracle SOA Suite 12c and will probably come in one big jar. 12.1.3 & 12.1.4 will add extra features and improvements to Elastic JMS & Dynamic Clusters. Elastic JMS in 12.1.3 will support Server Migration so you can't lose any JMS messages. In 12.1.4, Dynamic Clusters will have support for auto-scaling based on thresholds derived from user-defined metrics. WebLogic 12.1.4 will also have an API to control the Dynamic Clusters; this way we can easily program when to stop, start or remove nodes from a dynamic cluster. Further, Edwin is hosting a session on getting your FMW environment up and running in less than 10 minutes, using popular tooling to configure and manage the many FMW components in your technology stack. Register now for this virtual developer day to see more. We thank Edwin for his commitment to being an ACE, his work on his blog, his social media publishing and his overall commitment to helping other technologists be even more successful with Oracle products. Follow Edwin on his blog, Twitter, Facebook, LinkedIn, or read his ACE Profile.

    Read the article

  • Why is Samba Access from Windows So Slow?

    - by swalker2001
    I have set up a file server using Ubuntu 12.04 Server. The purpose is to serve several network drives to Windows users that have heretofore been served by numerous NAS drives. I have Samba set up with one share defined so far. I can connect to it fine from my test Windows 7 and Windows XP machines. When I do a directory listing on the share from Windows, it can take up to two minutes to get all the files listed; it would have taken about 1.5 seconds when I was using the Buffalo NAS. Sometimes it times out with no response at all. I have used the default smb.conf and simply added the following for the share I have set up so far:

        [engineering]
        comment = Ubuntu File Server Share
        path = /networkdriveshares/engineering
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755

    I have tried changing the workgroup setting to the Active Directory domain name our Windows computers use but didn't notice any difference. The only other change I made to the default smb.conf was adding in the recommended socket options:

        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

    There is lots of information about slow Samba shares online, but I have tried all of the solutions I have found and none have made a lick of difference. If there is no solution, is there a better way to set up a file server to be used by Windows clients?

    Read the article

  • Installer gets stuck with a grayed out forward button.

    - by TRiG
    I have a CD with Ubuntu 10.10 and a laptop with Ubuntu 8.10. The laptop had all sorts of crud on it, and anything I wanted to keep was backed up on an external drive, so I was happy to do a wipe and reinstall instead of an update. So after a bit of faffing about trying to work out how to get the thing to boot from the CD drive, I did that. So the screen comes up with the choice: the options are Try Ubuntu and Install Ubuntu. I choose to install and to overwrite my current installation. So far so good. I then get a progress bar labelled something like copying files (I forget the exact wording) and further options to fill in for my location, keyboard locale, username and password. On each of these screens there are forward and back buttons. On the last screen (password), the forward button is greyed out. Well, I think to myself, no doubt it will become active when that copying files progress bar completes. The progress bar never completes. It hangs. And the label changes from copying files to the chirpy ready when you are. The forward button remains greyed out. The back button is as unhelpful as you'd expect it to be. And there's nothing else to click. We have reached an impasse. I tried restarting the laptop, to test whether it actually was properly installed. It wasn't. I tried to run Ubuntu live from the CD, to test whether the disk was damaged. That wouldn't work either, but I suspect it's just because the laptop is old and has a slow disk drive. I'm typing this question on another computer using the Ubuntu live CD and it's working fine. So there's nothing wrong with the CD.

    Read the article

  • Interesting SSMS issue with waittype of PREEMPTIVE_OS_LOOKUPACCOUNTSID

    - by steveh99999
    Saw a recent issue with SQL 2008 SP1 where SQL Server Management Studio (SSMS) appeared to hang when a DBA expanded the database users tab of a database. When we looked at the waittype of the SSMS session - via sys.dm_os_waiting_tasks - it was waiting on PREEMPTIVE_OS_LOOKUPACCOUNTSID. I'd not come across this waittype before, and there was very little information out on the web for it. Looking at this issue using SQL Profiler, it then became apparent that SSMS was waiting on the function SUSER_SNAME. I did a quick test using SET STATISTICS TIME to prove it was the SUSER_SNAME function causing our issue:

        SELECT * FROM sys.database_principals                   -- completes in 6 milliseconds
        SELECT SUSER_SNAME(sid), * FROM sys.database_principals -- completes in 188941 milliseconds

    SUSER_SNAME validates a sid against Active Directory. What I haven't mentioned so far is that the database in question had been restored from a server on a different domain, and that domain was untrusted from the current domain. However, it appeared that SSMS was still trying to validate the Windows login of each user when displaying the users list, taking a long time to return NULL for each user from the untrusted domain. When all users that had a login from the untrusted domain were removed from the database, the issue was fixed. To find the users affected:

        SELECT name FROM sys.database_principals
        WHERE type_desc IN ('WINDOWS_USER', 'WINDOWS_GROUP')
        AND SUSER_SNAME(sid) IS NULL

    Although I have not raised this as a case with MS, I do think this is a bug in SSMS: I do not see why Windows logins should need to be validated when displaying only user details.

    Read the article

  • Modernising settings, packages

    - by Sam Brightman
    The update manager (possibly combined with the janitor) does a reasonable job of bringing packages up to date with a new release, removing ones that are replaced by different projects, etc. However, I'm left with the lingering feeling that quite a few settings linger from old releases. For example, some packages may be left around that I installed myself, whereas now the functionality is provided by default. Another example is that my user doesn't get the new theme, and the panel bar is a mess. I can compare against an inactive user on the same system: everything seems tidier. There are also things like the explosion of System Preferences, and user groups (the inactive, more recently created user is in groups that the older, active user isn't). In other areas (e.g. default font) I do seem to get the new defaults. Another example is Spotlight-equivalent search. I remember Beagle and Tracker, I remember removing Tracker when it used all system RAM and swap for 2 entire release cycles, but I don't know what I'm "supposed" to be using now. Is there even a default indexing search installed and exposed? aptitude install ubuntu-desktop doesn't do anything, so the basics are in place package-wise. Is there any way to update my settings to the modern "Ubuntu way" without reinstalling from scratch? Can I do so selectively, i.e. see the differences? Most of the time package management on Linux is an absolute joy compared to the alternatives, but if the desktop gets messed up after only a release or two, we're back to reinstalling just like Windows.

    Read the article

  • Scanline filling of polygons that share edges and vertices

    - by Belgin
    In this picture (a perspective projection of an icosahedron), the scanline (red) intersects that vertex at the top. In an icosahedron each edge belongs to two triangles. From edge a, only one triangle is visible; the other one is in the back. Same for edge d. Also, in order to determine what color the current pixel should be, each polygon has a flag which can be either 'in' or 'out', depending upon where on the scanline we currently are. Flags are flipped according to the intersection of the scanline with the edges. Now, as we go from a to d (because all edges are intersected with the scanline at that vertex), this happens: the triangle behind triangle 1 and triangle 1 itself are set 'in'; then 2 is set 'in' and 1 is set 'out'; then 3 is set 'in' and 2 is set 'out'; and finally 3 is set 'out' and the one behind it is set 'in'. This is not the desired behavior, because we only need the triangles which are facing us to be set 'in'; the rest should be 'out'. How do I process the edges in the Active Edge List (a list of edges that are currently intersected by the scanline) so the right polys are set 'in'? Also, I should mention that the edges are unique, which means there exists an array of edges in the data structure of the icosahedron which are pointed to by edge pointers in each of the triangles.
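    Not from the question itself, but the usual remedy for this kind of vertex double-toggling in scanline fills is to treat each non-horizontal edge as half-open in y: an edge toggles flags only when the scanline crosses [ymin, ymax), so a scanline passing exactly through a shared vertex flips each polygon's flag a consistent number of times. A minimal sketch (names are hypothetical):

        /* An edge toggles its polygon's in/out flag only if the scanline y lies
           in the half-open interval [ymin, ymax); horizontal edges never toggle. */
        typedef struct { double y0, y1; int poly; } Edge;

        int toggles(const Edge *e, double y) {
            double ymin = e->y0 < e->y1 ? e->y0 : e->y1;
            double ymax = e->y0 < e->y1 ? e->y1 : e->y0;
            return y >= ymin && y < ymax; /* the ymax endpoint does not toggle */
        }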

    Read the article

  • C++11 Tidbits: access control under SFINAE conditions

    - by Paolo Carlini
    Lately I have been spending quite a bit of time on the SFINAE ("Substitution failure is not an error") features of C++, fixing and tweaking various bits of the GCC implementation. An important missing piece was the implementation of the resolution of DR 1170 which, in a nutshell, mandates that access checking is done as part of the substitution process. Consider:

        class C { typedef int type; };

        template <class T, class = typename T::type> auto f(int) -> char;
        template <class>                             auto f(...) -> char (&)[2];

        static_assert (sizeof(f<C>(0)) == 2, "Ouch");

    According to the resolution, the static_assert should not fire, and the snippet should compile successfully. The reason is that the first f overload must be removed from the candidate set because C::type is private to C. On the other hand, before the resolution of DR 1170, the expected behavior was for the first overload to remain in the candidate set, win over the second one, and eventually lead to an access control error (*). GCC mainline (which will become 4.8) finally implements the DR, thus benefiting the many modern programming techniques heavily exploiting SFINAE, among which certainly the GNU C++ runtime library itself, which relies on it for the internals of <type_traits> and in several other places. Note that the resolution of the DR is active even in C++98 mode, not just in C++11 mode, because it turned out that the traditional behavior, as implemented in GCC, wasn't fully consistent in all the possible circumstances.

    (*) In practice, GCC didn't really implement this: the static_assert triggered instead.

    Read the article

  • Agile development challenges

    - by Bob
    With Scrum / user story / agile development, how does one handle scheduling out-of-sync tasks that are part of a user story? We are a small gaming company working with a few remote consultants who do graphics and audio work. Typically, graphics work should be done at least a week (sometimes two weeks) in advance of the code so that it's ready for integration. However, since Scrum is supposed to focus on user stories, how should I split the stories across iterations so that they still follow the user story model? Ideally, a user story should be completed by all the team members in the same iteration; I feel that splitting one in any way violates the core principle of user-story-driven development. Also, one front-end developer can work at twice the pace of the back-end developers. That throws the scheduling out of sync as well, because he is either constantly ahead of them, or what we have done is have him work on tasks not specific to this iteration just to keep busy. Either way, it's the same issue as above: splitting up user story tasks. If someone can recommend an active Google agile development group that discusses these and other issues, that'll be great. Also, if you know of a free alternative to Pivotal Labs, let me know as well. I'm looking now at Agilo.

    Read the article
