Search Results

Search found 18191 results on 728 pages for 'single board'.

Page 146/728

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing a sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really get how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate how likely it is that two images are similar by comparing hashes (Levenshtein or something of the sort).
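    What the poster describes is essentially content-addressed storage. A minimal sketch in Python (the board itself is PHP, but the structure is identical; the directory name and helper are made up for illustration):

        import hashlib
        import os
        import shutil

        UPLOAD_DIR = "uploads"  # hypothetical destination directory

        def save_deduplicated(src_path):
            """Store a file under its SHA-256 digest; skip the write if it already exists."""
            digest = hashlib.sha256()
            with open(src_path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            ext = os.path.splitext(src_path)[1].lower()
            dest = os.path.join(UPLOAD_DIR, digest.hexdigest() + ext)
            if not os.path.exists(dest):  # identical bytes -> identical name -> no duplicate
                shutil.copyfile(src_path, dest)
            return dest

    On the side question: cryptographic digests of two nearly identical images share no structure at all, so edit distance (Levenshtein) between digests says nothing about visual similarity; perceptual hashing is the usual tool for that.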

    Read the article

  • TrueCrypt & upgrading your hard drive?

    - by Danielb
    I currently use TrueCrypt to encrypt the hard drive in a Win7 laptop (everything in a single partition). I am looking to upgrade the hard drive to a model with significantly more storage capacity. I've had a look through the documentation but I couldn't see anything about this particular scenario. I assume I need to do something like the following:

    1. Remove the encryption from the existing drive.
    2. Clone the existing drive image onto the new hard drive.
    3. Physically install the new drive into the laptop.
    4. Resize the single partition to use all the space on the new drive.
    5. Encrypt the whole of the new, bigger drive with TrueCrypt.

    Read the article

  • DC on Hyper-V Host

    - by Saif Khan
    I've read a few similar questions but I'm still not clear. I have a small office (13 users) and recently purchased a single new server (this is all I have to work with for now). Server specs:

    - Memory: 16 GB
    - Drives: 6 SCSI (2 TB)
    - Processor: dual quad-core

    I plan on making the Hyper-V host the DC, and then running 2 VMs, one as a file server and the other as an application server. Question: since I am restricted to this single server, would it be a big issue making the Hyper-V host the domain controller? Your input is greatly appreciated.

    Read the article

  • Why do old (301-redirected) links stay on Google when breaking a site down into multiple domains?

    - by Sampo Sarrala
    Some background: We had a single site on a single domain (let's call it mainsite.com) with product information, but things have changed since and the product database has grown fast. So we decided to move some major products/manufacturers under their own domains (let's call one of them subsite.com) while still using our main database/codebase. What we've done:

    - Added the subsite.com domain for product 1 by Great Products Co.
    - Built some new nice-looking front pages, info pages, etc., plus detail pages that use information from the original db.
    - Redirected product/group links from mainsite.com using 301 redirects.
    - Verified that the redirects work as expected.
    - Waited some time for Google reindexing (over 30 days; I've heard that should be more than enough).

    Results: If I search for our moved products on Google, it finds and lists them, but with old links to our main site like mainsite.com/group/product1, when it should show the link to the new site, subsite.com/product1. Links from Google redirect as they should; as said, the redirects are verified 301s. Main question: Any reason why Google would not follow the 301 redirects and update its links so that they point to our new mfg/product site subsite.com?
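    One thing worth double-checking is exactly what crawlers see when they request the old URLs. A minimal sketch in Python (using the third-party requests package; the URLs are the example domains from the post):

        import requests

        # Fetch the old URL without following redirects, as a crawler first sees it.
        resp = requests.get("http://mainsite.com/group/product1", allow_redirects=False)
        print(resp.status_code)              # expect 301 (not 302, 200, or a meta refresh)
        print(resp.headers.get("Location"))  # expect http://subsite.com/product1

    A 302, a multi-hop redirect chain, or a Location header that bounces through intermediate URLs can all delay or prevent Google from transferring the old listing to the new domain.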

    Read the article

  • T-SQL Tuesday : Reflections on the PASS Summit and our community

    - by AaronBertrand
    Last week I attended the PASS Summit in Seattle. I blogged from both keynotes ( Keynote #1 and Keynote #2 ), as well as the WIT Luncheon - which SQL Sentry sponsored. I had a fantastic time at the conference, even though these days I attend far fewer sessions than I used to. As a company, we were overwhelmed by the positive energy in the Expo Hall. I really liked the notebook idea, where board members were assigned notebooks to carry around and take ideas from attendees. I took full advantage when...(read more)

    Read the article

  • Tag-structured filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the 4 sister sites :) Let me ask the question this way: all the file systems I have seen are hierarchical. That means a root directory, with some branching directories, and so on, until we have files residing in those directories. The exception is the AS/400 file structure, which has the concept of a Library that serves somewhat like a directory, but only one level deep. Why not have a directory-less filesystem where files are placed in a single location, and the file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links; one file could have relations to multiple subjects, rather than only a single parent directory to contain it. I hope the idea is clear.
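    The tag/file relationship database the poster describes is a straightforward many-to-many schema. A minimal sketch in Python with SQLite (table and column names are made up for illustration):

        import sqlite3

        con = sqlite3.connect("tagfs.db")
        con.executescript("""
        CREATE TABLE IF NOT EXISTS files (
            id        INTEGER PRIMARY KEY,
            blob_name TEXT NOT NULL UNIQUE   -- identifier in the flat, directory-less store
        );
        CREATE TABLE IF NOT EXISTS tags (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE
        );
        CREATE TABLE IF NOT EXISTS file_tags (  -- many-to-many: no single parent directory
            file_id INTEGER REFERENCES files(id),
            tag_id  INTEGER REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id)
        );
        """)

        # A "symbolic link" becomes just another row in file_tags.
        rows = con.execute("""
            SELECT f.blob_name FROM files f
            JOIN file_tags ft ON ft.file_id = f.id
            JOIN tags t       ON t.id = ft.tag_id
            WHERE t.name = ?
        """, ("photos",)).fetchall()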

    Read the article

  • Firefox Master Password (ssh-agent)

    - by BCable
    I use the master password feature of Firefox, and I also use SSH keys to log into a bunch of UNIX machines. For SSH, there is a very useful application called ssh-agent that runs in the background holding the information needed to unlock the key, so you don't have to type the passphrase every single time you want to connect. I open and close Firefox a lot, so I was curious: is there a way to have Firefox run in the background (preferably doing nothing, but the whole process would be fine too, I guess) so that I don't have to type my master password every single time I open Firefox? Thanks!

    Read the article

  • links for 2011-02-17

    - by Bob Rhubart
    - ArchitectACEs - Oracle Wiki: Putting a Face on the Architect ACE. The Oracle ACEs listed here have identified themselves, or have been identified by fellow ACEs, as software architects. As... (tags: ping.fm)
    - Debra's thoughts on Oracle and User Groups: I did it - I did the Fusion UX Demo. Oracle ACE Director Debra Lilley shares her experience in presenting a Fusion Applications demo at RMOUG. (tags: oracle otn oracleace)
    - The Blas from Pas: JRuby Script to Monitor an Oracle WebLogic GridLink Data Source Remotely. "In the WebLogic 10.3.4 release, a single data source implementation has been introduced to support Oracle RAC cluster. To simplify and consolidate its support for Oracle RAC, WebLogic Server has provided a single data source that is enhanced to support the capabilities of Oracle RAC." (tags: oracle otn weblogic)
    - Show Notes: Bob Hensle on IT Strategies from Oracle (ArchBeat). In Part 1 Bob Hensle talked about the various documents in the IT Strategies from Oracle library. In Part 2 (now available) Bob talks about how SOA and other factors are reflected in those documents. (tags: oracle otn entarch podcast)
    - PODCAST: Examining the state of EA and findings of recent survey | Open Group Blog. A transcript of a podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture, from The Open Group Conference, San Diego 2011. (tags: entarch opengroup)
    - A Virtual Dilemma (Antony Reynolds' Blog). SOA author Antony Reynolds shares a solution. (tags: oracle otn soa)
    - Webcast: Live Online Forum: Oracle Security - February 24, 9:00am PT. Speakers: Mary Ann Davidson, Chief Security Officer, Oracle; Tom Kyte, Senior Technical Architect, Oracle; Jeff Margolies, Partner, Security Practice, Accenture; Vipin Samar, VP, Database Security Product Development, Oracle; and Nishant Kaushik, Chief Strategist, Identity and Access Management. (tags: oracle security)
    - Obama banks on cloud, consolidation, to hold down IT costs | Computerworld NZ. President Obama's fiscal 2012 budget proposal keeps IT spending almost flat compared to fiscal 2010, mostly due to the consolidation of data centers and a shift to cloud computing systems. (tags: ping.fm)

    Read the article

  • Virtual Developer Day: Oracle Fusion Development (July 10, Americas TZ)

    - by oracletechnet
    Help! We love bringing Virtual Developer Days to you, and we can't stop! Yes, again we're proud and happy to offer you the newest flavor of Virtual Developer Day, this one to debut on July 10 in the Americas timezones: Oracle Fusion Development (register). In this workshop, we'll give you a deep dive into the ever-expanding world of Oracle ADF, including:

    - Oracle ADF vs. Oracle APEX use cases
    - Oracle ADF Mobile Development
    - Oracle ADF + Eclipse
    - Oracle WebCenter and Oracle ADF Development
    - and more...

    Looks like paradise for Oracle ADF junkies, or developers who aspire to become one. As is the case with all our Virtual Developer Days, Oracle PMs will be on call via live chat to answer your questions and provide support. Get on board!

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available, and more, in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every changed line of code all the way up to Requirements, and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as an Excel list if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between its creation and this time. A better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability, by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with those Changesets, associate the Changesets with the Build, and change the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single changed line of code all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation, for audit, that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test, and the test results, of a Unit Test designed to prove the existence of a Bug and then guard against its reoccurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated and that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if a Change Request is big enough that it requires being broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Dual-bootable Virtual Machine

    - by ojrac
    My work computer is a Linux desktop with a Windows 7 virtual machine for Visual Studio and IE testing. I'm very picky, and I don't want to configure two Windows installs... but I can't think of a way to do this without running afoul of Windows activation. I've already set up VirtualBox to run my VM off a physical hard drive, and grub isn't too hard to configure. But it'd be a waste of time without solving the activation problem. Is there any way I can boot into a single install of Windows as a virtual machine and on actual hardware without having to reactivate (until I'm eventually flagged as a pirate) every time I switch between the two? Is there any MS-endorsed way to use a single installed license with two sets of hardware?

    Read the article

  • Standby problem: PC cannot enter standby mode

    - by Leon95
    I have a problem with the standby function: my computer does not enter standby mode under Ubuntu 14.04 LTS. If I remember right, it worked with Ubuntu 13.10, but that version was not installed on this PC for long. Now, when I press Standby in the menu or on my keyboard, the display turns black for a few seconds, then some messages appear for a very short moment on the screen. After that, the log-in screen appears. Twice I was able to enter standby, but the other times it fails. Technical data about my PC:

    - Ubuntu 14.04 with all updates
    - Main memory: 3.8 GiB
    - Processor: Intel® Core™ i3-2330M CPU @ 2.20GHz × 4
    - Graphics: Intel® Sandybridge Mobile
    - Graphics board: NVIDIA GeForce GT 555M CUDA 1GB
    - Dual-boot system with Win7 x64
    - Medion P6812 laptop

    Here is the message output: usually I get only half of the screen filled with messages like that; when I filmed it, there were many more. I don't know, maybe this is a bug.

    Read the article

  • Only recognizes one of multiple partitions on SD card

    - by Jay Ngo
    Hello everybody, I split my SD card into 2 partitions. When I use a USB card reader to read the SD card, only one partition shows up on the screen; the other doesn't. I have run the command "sudo fdisk -l" and the result is the same: only one partition is recognized. But I do believe both partitions of my SD card work fine, because I can still boot my single-board computer from that SD card and run some programs which are inside the unreadable partition. How can I access both partitions of my SD card? Does anyone know how to solve this kind of problem? I really appreciate your help.
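    One way to check whether the card's partition table itself lists both partitions, independent of what the OS mounts, is to read the table directly. A small sketch in Python (assuming an MBR-partitioned card appearing as the hypothetical /dev/sdb; reading the raw device needs root):

        import struct

        DEVICE = "/dev/sdb"  # hypothetical device node for the card reader; check dmesg

        with open(DEVICE, "rb") as disk:
            mbr = disk.read(512)

        # Four 16-byte partition entries start at offset 446 of an MBR.
        for i in range(4):
            entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
            ptype = entry[4]                                 # partition type byte
            lba_start, sectors = struct.unpack_from("<II", entry, 8)
            if ptype != 0:
                print(f"entry {i}: type=0x{ptype:02x} start={lba_start} sectors={sectors}")

    If both entries print here but only one appears in fdisk on the reader, the reader or its driver is the suspect rather than the card.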

    Read the article

  • Let's introduce the Oracle Enterprise Data Quality family!

    - by Sarah Zanchetti
    The Oracle Enterprise Data Quality family of products helps you to achieve maximum value from your business applications by delivering fit-for-purpose data. OEDQ is a state-of-the-art collaborative data quality profiling, analysis, parsing, standardization, matching and merging product, designed to help you understand, improve, protect and govern the quality of the information your business uses, all from a single integrated environment. The Oracle Enterprise Data Quality products are:

    - Oracle Enterprise Data Quality Profile and Audit
    - Oracle Enterprise Data Quality Parsing and Standardization
    - Oracle Enterprise Data Quality Match and Merge
    - Oracle Enterprise Data Quality Address Verification Server
    - Oracle Enterprise Data Quality Product Data Parsing and Standardization
    - Oracle Enterprise Data Quality Product Data Match and Merge

    The following are some of the key features of OEDQ:

    - Integrated data profiling, auditing, cleansing and matching
    - Browser-based client access
    - Ability to handle all types of data – for example customer, product, asset, financial, operational
    - Connection to any JDBC-compliant data sources and targets
    - Multi-user project support (role-based access, issue tracking, process annotation, and version control)
    - Services Oriented Architecture (SOA) support for designing processes that may be exposed to external applications as a service
    - Designed to process large data volumes
    - A single repository to hold data along with gathered statistics and project tracking information, with shared access
    - Intuitive graphical user interface designed to help you solve real-world information quality issues quickly
    - Easy, data-led creation and extension of validation and transformation rules
    - Fully extensible architecture allowing the insertion of any required custom processing

    If you need to learn more about OEDQ, or to get assistance with any kind of issue, the Oracle Technology Network offers a huge range of resources on Oracle software: discuss technical problems and solutions on the Discussion Forums; get hands-on, step-by-step tutorials with Oracle By Example; download Sample Code; and get the latest news and information on any Oracle product. You can also get further help and information with Oracle software from My Oracle Support and Oracle Support Services. An Information Center is also available, where you can find technical information and fast solutions to the most common, already-solved issues: Information Center: Oracle Enterprise Data Quality [ID 1555073.2]

    Read the article

  • Live from the #summit13 keynote : 2013-10-17

    - by AaronBertrand
    Douglas McDowell (EVP Finance) takes the stage (no kilt) and talks numbers. PASS has an impressive $1MM in reserves as a "rainy day" fund. Last fiscal year they spent $7.6MM on community; 30% of that internationally. Bill Graziano comes on (no kilt) to say goodbye and thanks to the outgoing board members, Douglas McDowell, Rob Farley and Rushabh Mehta. Thomas LaRock comes on. No kilt, but he did tuck his shirt in. He introduces the incoming executive team. The 2014 PASS Business Analytics Conference...(read more)

    Read the article

  • How to extend selection in gnome-terminal?

    - by tomasorti
    In a terminal I can select a single line by double-clicking with the Left Mouse Button. With xterm, I can extend that selection by clicking with the Right Mouse Button at the place I want to extend it to. Then I can paste the whole selection with the Middle Mouse Button, or paste it into another application. In gnome-terminal, it seems I can extend the selection by clicking with the Left Mouse Button while holding the Shift key. Visually, it seems the selection is made, but when clicking with the Middle Mouse Button or pasting into another application, I only get the primary single-line selection. How can I get the whole selection under gnome-terminal? Is it possible to use selections in gnome-terminal as xterm does? Cheers, Tomas.

    Read the article

  • bash starts replacing the characters on the current line instead of moving over to the next line

    - by Lazer
    I use the bash shell:

        $ bash --version
        GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
        Copyright (C) 2005 Free Software Foundation, Inc.
        $

    Sometimes, when I am typing a command at the prompt that is pretty lengthy and does not fit on the current line, instead of displaying the extra characters on the next line, bash starts again on the current line, replacing the characters that were there and making a mess. What should happen:

        |---------------------------------------------|
        | $ my big long command takes a lot of argumen|
        | s and does not fit in a single line         |
        |                                             |
        |---------------------------------------------|

    What happens instead:

        |---------------------------------------------|
        | s and does not fit in a single linef argumen|
        |                                             |
        |                                             |
        |---------------------------------------------|

    The issue is intermittent. If I resize my shell window to a really small width, normal behaviour is restored. Does anyone have any idea what is happening here?

        $ echo $TERM
        xterm
        $ echo $PS1
        \[\e[30m\][\t]\[\e[0m\]\[\e]0;\w\a\]\[\e[30m\][\W]$
        $
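    Background that may help in reading answers to this class of problem (not necessarily the poster's exact bug): bash only knows how wide the prompt is because the \[ and \] markers in PS1 declare which characters are non-printing, and it combines that width with the terminal's reported column count when deciding where to wrap. A small Python sketch (the helper name is made up) of how the printable width of a PS1 string is conceptually measured:

        import re

        def printable_width(ps1):
            """Printable length of a bash prompt: everything outside \\[...\\] spans."""
            visible = re.sub(r"\\\[.*?\\\]", "", ps1)  # drop declared non-printing spans
            return len(visible)

        prompt = r"\[\e[30m\][\t]\[\e[0m\]\[\e]0;\w\a\]\[\e[30m\][\W]$"
        print(printable_width(prompt))  # 9; \t and \W still expand at display time

    If an escape sequence ends up outside the \[...\] brackets, or the terminal and bash disagree about the window width after a resize, redrawn input lands on the wrong column, which looks exactly like the mess shown above.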

    Read the article

  • Does an onboard video affect the X windows configuration?

    - by Timothy
    Does the onboard video on the motherboard affect the X Window System configuration? My system has onboard and PCIe video. The onboard video is an NVIDIA GeForce 7025 GPU (onboard graphics, max. shared memory up to 512 MB under the OS via TurboCache). I have a PCIe dual-head video card installed with two monitors. The video card is a GeForce 8400 GS with 512 MB of memory. When installing Ubuntu 12.04, only one monitor worked. When pulling up System Settings - Displays, it shows a laptop; this is a desktop PC. I did get both monitors to work using NVIDIA's TwinView -- a complicated process! When checking the NVIDIA settings now, it shows the monitors disabled. The NVIDIA X Server Settings tool does show the GPU and all the information. I was thinking it's seeing the onboard video on the motherboard. Why else would it show a laptop?

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up to last month's Global Perspective note from a well known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World, and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and then at the SOA advisory board that many of these ACE Directors already take part in, there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives, which included one from Pella, who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic, as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page, but felt the options open are now much wider than Fusion Applications: there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But life also came full circle for me. In my first post I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event, and he gave a potted, tech-talk-free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving this today: in the past it was the move from simply automating processes (ERP etc.), through altering those processes (SOA), and on to consolidation.
    The next drivers are around the need to predict, both faster and more accurately, and how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept, and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you think too. Technology can help us drive our businesses better, and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3

    About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Editor's Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog, so do stay tuned…

    Read the article

  • Can SAML use Salesforce login information to log into another system inside a view?

    - by steve
    I want my sales people, who use Salesforce every day, to be able to view orders in an ecommerce system through a dashboard view in Salesforce. The ecom is built and sitting on my web server, but the sales reps don't like to log into too many things in one day, so they are not using what I built them. I read recently that Salesforce can use SAML, but it was unclear what you can do with it. What I'd like is to make a new dashboard view that will open up the ecom inside of Salesforce. The ecom uses a login system, but if it is inside of Salesforce, would SAML automatically log the user into the ecom?

    Read the article

  • Out-of-the-Box Integration Links Primavera Solutions with PeopleSoft Projects Applications

    - by Sylvie MacKenzie, PMP
    In a move that brings best-in-class enterprise project portfolio management to Oracle’s PeopleSoft enterprise resource planning customers, Oracle announced the integration of Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management. The combination of PeopleSoft financial controls and Primavera portfolio management capabilities brings greater oversight of end-to-end processes to help organizations improve the planning and execution efforts needed to deliver projects on time and within budget. “As an organization with many high-value, project-driven initiatives, we are very pleased to see Oracle’s investment in this important integration,” says Janardhanan Sankar, senior vice president for technology and quality at ITC Infotech India Ltd. Oracle’s PeopleSoft projects applications enable project-centric organizations and departments to establish core operational processes for full project lifecycle management across operations and finance. The integration with Primavera P6 Enterprise Project Portfolio Management means organizations can eliminate costly and difficult-to-maintain proprietary integrations. Organizations can also standardize on the Oracle technologies to:

    - Align back-office budgets and costs with project operations to help ensure accurate forecasting of costs, resources, and schedules
    - Provide an accurate single source of truth to financial managers and analysts using Oracle’s PeopleSoft projects applications, and to project managers using Primavera P6 Enterprise Project Portfolio Management
    - Enhance project collaboration and execution by having all users utilizing common solutions to communicate, plan, and deliver projects

    “By bringing together Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management, we are able to provide customers with the infrastructure they need to achieve a single source of truth on the projects they are managing,” says Paco Aubrejuan, Oracle’s group vice president and general manager, PeopleSoft. “This real-time visibility drives profitability, increases productivity, and improves operations.” For more information, view the on-demand Webcast, “Bridging Business Processes for Optimal Portfolio Performance,” or read about the new integration.

    Read the article

  • How do I turn on wireless adapter on HP Envy dv6 7200 under Ubuntu (any version)?

    - by Dave B.
    I have a new HP Envy dv6 7200 with dual-boot Windows 8 / Ubuntu 12.04. In Windows, the F12 key activates the "airplane mode" switch, which enables/disables both the on-board (mini PCIe) and USB wireless adapters. In Ubuntu, however, the wireless adapter is turned off by default and cannot be turned back on via the F12 key (or any other combination of F12 and Ctrl, Fn, Shift, etc.). Let me explain the "fixes" I've seen in various forums and what did or did not happen. These are listed in no particular order. (Spoiler alert: wireless is still broken.)

    Solution 1? Use HP's "Wireless Assistant" utility to permanently activate the wireless card in Windows, then boot into Ubuntu to happily find it working. Unfortunately, this utility works in Windows 7 but not Windows 8. On the other hand, hardware drivers from HP are only available for Windows 8 for this model. Catch-22 (I could not find a comparable utility for Windows 8).

    Solution 2? Use a USB wireless adapter to sidestep the on-board device. I purchased such a device from thinkpenguin.com to be sure that it would be Linux-friendly. However, the wireless switch enables/disables all wireless devices, including USB. So there's my $50 donation to the nice folks at thinkpenguin.com, but still no solution.

    Solution 3? Following the Think Penguin folks' suggestion, modify the mini PCI Express adapter following the instructions here: http://www.notebookforums.com/t/225429/broken-wireless-hardware-switch-fix Tempting, but I would then violate the terms of my warranty mere days after opening the box. This might be a good solution for an older machine that you want to get your geek on with, but not for a new box.

    Solution 4? rfkill unblock all. No effect whatsoever:

        ubuntu@ubuntu-hp-evny:~$ rfkill unblock all
        ubuntu@ubuntu-hp-evny:~$ rfkill list all
        0: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    Solution 5? Re-install drivers. Done and done. Ubuntu recognizes the device - perhaps even without re-installing the drivers? - but cannot turn it on. How do I know this? In the Network Manager drop-down menu, the wireless option is blacked out and a message reads something like: "wireless network is disabled by a hardware switch".

    Solution 6? Identify a physical switch on the laptop and flip it. There is no such switch on this machine. In fact, walking through Best Buy yesterday, I checked and not a single new laptop PC had a physical switch on it; all of the wireless switches are either the F2 or F12 key... I wonder whether askubuntu will be plagued by this exact issue in the near future?
    Additional info - lspci:

        ubuntu@ubuntu-hp-evny:~$ lspci
        00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.2 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 3 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 0de9 (rev a1)
        08:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 5229 (rev 01)
        0a:00.0 Network controller: Ralink corp. Device 539b
        0b:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

    Any suggestions would be much appreciated!
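    For anyone scripting around this, the same soft/hard block state that rfkill prints is exposed through sysfs. A small sketch (assuming the standard /sys/class/rfkill interface; attribute availability may vary by kernel):

        from pathlib import Path

        # Read block state straight from sysfs - the same data `rfkill list` shows.
        for dev in sorted(Path("/sys/class/rfkill").glob("rfkill*")):
            name = (dev / "name").read_text().strip()
            soft = (dev / "soft").read_text().strip()  # "1" = soft blocked (software)
            hard = (dev / "hard").read_text().strip()  # "1" = hard blocked (firmware/switch)
            print(f"{dev.name}: {name} soft={soft} hard={hard}")

    A hard block like the one shown above cannot be cleared with rfkill unblock; it has to be released by whatever the firmware treats as the radio switch, which is exactly the poster's problem.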

    Read the article

  • System requirements of a write-heavy application serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers. I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests. Crosses fingers to avoid 100 concurrent requests). I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download user data in their locality and cache it for 1-2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for attendance of those who do not belong in the current locality. These will pass in the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax, so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. As of this moment the userid may only reach 6 digits, and eventids are 4- to 5-digit integers too. I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet, since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500-1k rows a week. If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
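    A back-of-envelope check of the load figures quoted above (the numbers are the poster's own; the arithmetic is just made explicit):

        # Steady-state request rate
        users = (1000, 1500)          # concurrent logged-in users
        interval_s = 10               # ~1 request per user every 10 seconds
        print([u / interval_s for u in users])       # [100.0, 150.0] requests/second

        # Attendance table growth
        rows_per_day = 400_000        # one row per user per active day
        active_days_per_week = 3
        print(rows_per_day * active_days_per_week)   # 1_200_000 rows/week, as stated

    At roughly 1.2M narrow rows a week the table grows quickly but steadily; arguably the harder constraint is the concurrent write burst during the 30-minute window, which is the figure any single-server plan would need to be load-tested against.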

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right. The fact is they do take place and happen. And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit "iffy" on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states:

    "You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments."

    Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a full server backup. Then, when a restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup.

    The restore process:

    Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm and such.

    Credits: Sean McDonough for passing along the information available on TechNet.

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now. We use a single TFS Team Project:

    $/MAIN/src/
    $/MAIN/src/ApplicationA/VSSOlution.sln
    $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
    $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
    $/MAIN/src/ApplicationB/...
    $/MAIN/src/ApplicationC
    $/MAIN/src/SharedInfrastructureA
    $/MAIN/src/SharedInfrastructureB

    My goal (a pretty typical promotion pipeline):

    1. When a code change is made to a given application, I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure components, and I often have some database scripts or changes as well.
    2. If developer testing passes, I want to have a manually triggered but automated deploy of that build on a STAGING server, where end users will review new functionality.
    3. Once it's approved by end users, I want a manually triggered auto-deploy to production.

    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is more single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article
