Search Results

Search found 972 results on 39 pages for 'grant h'.

Page 36/39 | < Previous Page | 32 33 34 35 36 37 38 39  | Next Page >

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    There were 9 .dmp files; after each one it asked me for the next, and then I received the message "Import terminated successfully with warnings." The warnings were:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Since it says the import terminated successfully, my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'Tables' node for the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
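    One quick way to see where the rows actually landed is to ask the data dictionary which objects each schema owns – a minimal sketch, assuming you can connect as SYSTEM or another privileged user (dba_objects needs elevated access):

        SQL> select owner, object_type, object_name
          2  from dba_objects
          3  where owner in ('IMPORTER', 'OVIEDOE')
          4  order by owner, object_type, object_name;

    If the tables are listed under IMPORTER, refresh the SQL Developer connection; if they are listed under a different owner, the FROMUSER/TOUSER remapping did not apply the way you expected.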

    Read the article

  • Advanced All In One .NET Framework (should I go for a software factory?)

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users and save it to a database, across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

    .NET / C#, Windows Forms
    unit testing
    continuous integration
    logging
    screens with lists, combo boxes, text boxes, and add/delete/save/cancel actions that are easy to update when you add a property to your classes or a field to your database
    auto-completion on controls to help the user find their way
    use of an ORM like NHibernate
    easy multithreading and display of wait screens for the user
    easy undo/redo
    tabbed child windows
    search forms
    ability to grant access to some functionality according to user profiles
    MVP/MVVM or whatever design patterns
    either code generation from the database to C# classes, or generation of the database schema from C# classes
    some kind of database versioning/upgrade to easily update the database when I release patches once the application is in production
    automatic control resizing
    code metrics analysis
    a code generator I can run against my entities to produce rough forms I can rearrange afterwards
    a code documentation generator
    ...

    At this point I have 3 options:

    Build from scratch on top of the CLR :(
    Find the functionality among several open source frameworks and use them as an infrastructure stack
    Find a "software factory"

    I know it's a lot, but I really would like to build on existing code so I can focus on business rules. What open source tools would you use to achieve this?

    Read the article

  • UnauthorizedAccessException when running desktop application from shared folder

    - by Atara
    I created a desktop application using VS 2008. When I run it locally, all works well. I shared my output folder (WITHOUT allowing network users to change my files) and ran my exe from another Vista computer on our intranet. When running the shared exe, I receive a "System.UnauthorizedAccessException" when trying to read a file. How can I give permission to allow reading the file? Should I change the code? Should I grant permission to the application\folder on the Vista computer? How? Notes: I do not use ClickOnce; the application should be distributed using xcopy. My application's target framework is ".NET Framework 2.0". On the Vista computer, ControlPanel | UninstallOrChangePrograms says it has "Microsoft .NET Framework 3.5 SP1". I also tried to map the folder to a drive, but got the same errors, only now the fileName is "T:\my.ocx".

        ' ----------------------------------------------------------------------
        ' my code:
        Dim src As String = mcGlobals.cmcFiles.mcGetFileNameOcx()
        Dim ioStream As New System.IO.FileStream(src, IO.FileMode.Open)

        ' ----------------------------------------------------------------------
        Public Shared Function mcGetFileNameOcx() As String
        ' ----------------------------------------------------------------------
            Dim dirName As String = Application.StartupPath & "\"
            Dim sFiles() As String = System.IO.Directory.GetFiles(dirName, "*.ocx")
            Dim i As Integer
            For i = 0 To UBound(sFiles)
                Debug.WriteLine(System.IO.Path.GetFullPath(sFiles(i)))
                ' if found any - return the first:
                Return System.IO.Path.GetFullPath(sFiles(i))
            Next
            Return ""
        End Function

        ' ----------------------------------------------------------------------
        ' The Exception I receive:
        System.UnauthorizedAccessException: Access to the path '\\computerName\sharedFolderName\my.ocx' is denied.
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(...)
           at System.IO.FileStream..ctor(...)
           at System.IO.FileStream..ctor(String path, FileMode mode)
        ' ----------------------------------------------------------------------
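    One detail worth checking, offered as a hedged guess rather than a confirmed diagnosis: New FileStream(src, FileMode.Open) requests ReadWrite access by default, and a share that does not allow network users to change files will refuse that even though plain reading would succeed. A minimal sketch of opening the file explicitly for reading, reusing the function above:

        ' Request read-only access so the read-only share permission is sufficient.
        Dim src As String = mcGlobals.cmcFiles.mcGetFileNameOcx()
        Using ioStream As New System.IO.FileStream(src, IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read)
            ' ... read from ioStream here ...
        End Using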

    Read the article

  • How do I get Firefox to launch Visio when I click on a linked .vsd file?

    - by Dean
    On our intranet site, we have various MS Office documents linked. When I click on a Word, Excel or PowerPoint file, Firefox gives me the option to Open, Save or Cancel. When I click on Open, the appropriate app is launched and the file is loaded. This is perfect. But for some reason, when I click on a linked Visio file, I only get the option to Save, which is inconvenient. I know that Firefox knows the linked file is a Visio file because it tells me so in the dialog box: "You have chosen to open example.vsd which is a: Microsoft Visio Drawing". How can I make Firefox launch Visio when I click on a linked Visio file? Update: Firefox is not launching Visio when I click on a linked Visio file because the web server does not identify the content-type correctly. It identifies the Visio file as application/octet-stream instead of application/x-visio. (Thanks Grant Wagner.) This explains why it doesn't work. And in my case, I may be able to get the Apache config file changed, but this is not certain. However, I would love to know if there is a way to configure Firefox itself to launch Visio based on some other criteria, like file name extension. This way I can open Visio files even if I don't have access to the Apache configuration.
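    If the Apache change does become possible, the usual fix is a one-line MIME mapping in httpd.conf or an .htaccess file – a sketch, assuming mod_mime is active and .vsd is the only extension you need:

        AddType application/vnd.visio .vsd

    (application/vnd.visio is the registered Visio type; application/x-visio, mentioned above, is the older unofficial name, and either should beat application/octet-stream.) As far as I know, Firefox keys its Open/Save helper handling to the server-sent content-type rather than the file extension, so there is no supported way to make it open application/octet-stream links by extension alone.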

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, we've got a process which redeploys ASP.NET websites. The deployment code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent to the point it didn't matter, but on some of our higher-traffic sites it now happens the majority of the time. I'm wondering if anyone has a workaround or alternative approach that I haven't thought of. Currently my ideas are:

    Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really that good.
    Deploy to a new folder and update IIS's webroot to point at it (see the sketch below). I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy.

    Does anyone know the best way to do this, or whether it's possible to do #2 without running the publishing application as a user who has admin access (I'm willing to grant it special privileges, but I'd prefer to stop short of administrator)?
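    For the second idea, a hedged sketch of the folder swap using the IIS 7 managed API (Microsoft.Web.Administration): the site name and path here are placeholders, and committing changes to applicationHost.config still needs elevated rights or explicitly delegated write access, so this does not remove the permissions question by itself.

        using Microsoft.Web.Administration;

        // Repoint the site's root virtual directory at the freshly deployed folder;
        // IIS then serves new requests from the new path.
        using (var manager = new ServerManager())
        {
            var site = manager.Sites["MySite"]; // hypothetical site name
            var root = site.Applications["/"].VirtualDirectories["/"];
            root.PhysicalPath = @"C:\deployments\release-20100501"; // hypothetical path
            manager.CommitChanges();
        }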

    Read the article

  • Align child element to bottom with CSS

    - by alex
    I have a form input, and the label spans multiple lines, and I want the corresponding checkbox to appear at the bottom (last line of the label element). Here is what I was playing with.

    CSS:

        .standard-form { width: 500px; border: 1px solid red; }
        .standard-form .input-row { overflow: hidden; margin-bottom: 0.8em; }
        .standard-form label { width: 25%; float: left; }
        .standard-form .input-container { width: 70%; float: right; }
        .standard-form .checkbox .input-container {
            display: table-cell;
            height: 100%;
            vertical-align: text-bottom;
        }

    HTML:

        <form class="standard-form">
            <div class="input-row checkbox" id="permission">
                <label for="input-permission">
                    Do I hereby grant you permission to do whatever tasks are neccessary to achieve an ideal outcome?
                </label>
                <div class="input-container">
                    <input type="checkbox" id="input-permission" name="permission" value="true" />
                </div>
            </div>
        </form>

    It is also online at JSbin. Is there any way to do this? I notice that div.input-container isn't expanding, which is the old multi-column problem with CSS. I thought I could get this going with display: table-cell and vertical-align: bottom, but I haven't been able to do it yet. I don't mind that IE6/7 won't render it correctly.
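    One approach worth trying, sketched under the assumption that the label is always the taller element: make the row the positioning context and pin the checkbox container to its bottom edge, which sidesteps the unequal-column-height problem entirely.

        .standard-form .checkbox { position: relative; }
        .standard-form .checkbox .input-container {
            position: absolute;
            bottom: 0;
            right: 0;
            width: 70%;
        }

    Absolutely positioned children are taken out of the float flow, so the row's height comes from the label alone and the checkbox rides on its last line; the trade-off is that a container taller than the label would overflow the row.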

    Read the article

  • IIS7.5 and MVC 2 : Implementing HTTP(S) security

    - by Program.X
    This is my first ASP.NET MVC application, and my first on an IIS 7.x installation where I have to do anything over and above the standard. I need to enforce Windows authentication on the /Index and /feeds/xxx.svc pages/services. In ASP.NET Web Forms, I would apply the Windows permissions to the files and remove Anonymous authentication in IIS 6. What is the equivalent in MVC/IIS 7? This needs to work over HTTP/S, but don't worry about that; that's in hand. I have tried modifying the permissions on the /Index.aspx view, which does block access: it asks me for a username/password, but does not grant access when I enter a valid username/password. Pressing Escape gives me the exception "Access to the path 'E:\dev\xxx\xxx.ConsultantRegistration.Web.Admin\Views\ConsultantRegistration\index.aspx' is denied.", which does get sent as a 401. So although the permissions do exist on the Index.aspx view, I can't use those credentials to access it. I have in my web.config: … What am I missing?
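    For what it's worth, in MVC the request is routed to a controller action rather than served from the .aspx file, so NTFS permissions on the view never take part in the decision; authorization is normally declared on the action or controller instead. A minimal sketch with hypothetical controller and group names, assuming Windows authentication is enabled and Anonymous is disabled for the site in IIS 7:

        using System.Web.Mvc;

        public class ConsultantRegistrationController : Controller
        {
            // Demands an authenticated Windows user; Roles narrows it to a group.
            [Authorize(Roles = @"DOMAIN\ConsultantAdmins")] // hypothetical group
            public ActionResult Index()
            {
                return View();
            }
        }

    The same effect can also be approximated site-wide with a <deny users="?" /> authorization rule in web.config, but the attribute keeps the rule next to the code it protects.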

    Read the article

  • Google Drive API invalid_grant after removing access

    - by Sparafusile
    I have been writing a desktop application that uses the Google Drive API v2. I have the following code:

        var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets { ClientId = ClientID, ClientSecret = ClientSecret },
            new[] { DriveService.Scope.Drive },
            "user",
            CancellationToken.None
        ).Result;

        this.Service = new DriveService(new BaseClientService.Initializer()
        {
            HttpClientInitializer = credential,
            ApplicationName = "My Test App",
        });

        var request = this.Service.Files.List();
        request.Q = "title = 'foo' and trashed = false";
        var result = request.Execute();

    The first time I ran this code it opened a browser and asked me to grant permissions to the App, which I did. Everything worked successfully until I realized I was using the wrong Google account. At that point I logged into the wrong Google account and revoked access to my App. Now, whenever I run the same code it throws an exception:

        Error:"invalid_grant", Description:"", Uri:""

    When I examine the service and request objects, it looks like the oauth_token isn't getting created any more. I know what I did to mess things up, but I can't figure out how to correct it so I can use a different Google account for testing. What do I need to do?
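    A hedged reading of what happened: AuthorizeAsync caches the refresh token in a local IDataStore (a FileDataStore under your AppData folder by default), and revoking access killed that token on Google's side, so every run now replays a dead grant instead of prompting again. A sketch of forcing a fresh consent flow by clearing an explicit store first – the store name is hypothetical:

        using Google.Apis.Auth.OAuth2;
        using Google.Apis.Util.Store;

        // Use a named store so it is obvious what to clear.
        var store = new FileDataStore("MyTestApp.DriveAuth"); // hypothetical name
        await store.ClearAsync(); // drop the revoked token

        var credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets { ClientId = ClientID, ClientSecret = ClientSecret },
            new[] { DriveService.Scope.Drive },
            "user",
            CancellationToken.None,
            store); // the browser consent screen runs again, with the right account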

    Read the article

  • Delay android PlusClient login request

    - by jamesakadamingo
    I am trying to implement the new Play Services API within my Android application to use a +1 button. I have it working nicely; all the expected functionality is there. However, it has one rather annoying feature (seriously, Google!). When you instance the PlusClient:

        mPlusClient = new PlusClient(this, this, this, Scopes.PLUS_PROFILE);

    your user is presented with a "Pick your account" dialog (if they have more than one account) followed by a "grant access" dialog. I understand the need for these steps; however, they really get in the way of the user experience! My initial activity (post splash screen) now has the +1 button, which means that you have to instance the PlusClient. Doing so in the onCreate() method (as Google suggests) means that my user is given the "authorisation" screen before they even know what is going on! What I want to do is delay that until they actually click the +1 button. That way they will know why they are being asked to authorise access to their account! Any ideas? I have tried using an onClick listener on the +1 button to instance it, but it didn't work.
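    A hedged sketch of one way to split this up: keep constructing the PlusClient in onCreate() (construction on its own should not show any UI), but defer connect() – which is what actually kicks off the account picker and consent flow – until the first tap on the button. The listener wiring and view id are hypothetical; connect(), isConnected() and isConnecting() are the standard GooglePlayServicesClient methods:

        // In onCreate(): build the client but do not connect yet.
        mPlusClient = new PlusClient(this, this, this, Scopes.PLUS_PROFILE);

        View plusButton = findViewById(R.id.plus_one_button); // hypothetical id
        plusButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (!mPlusClient.isConnected() && !mPlusClient.isConnecting()) {
                    mPlusClient.connect(); // account picker / consent appears here
                }
            }
        });

    Note that the consent dialog itself still arrives through onConnectionFailed() with a resolution PendingIntent you have to start, so keep that callback wired exactly as before.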

    Read the article

  • call FB.login() after FB.init() automatically

    - by Tobi Projectx
    I'm developing an app for Facebook. My code:

        function init() {
            window.fbAsyncInit = function() {
                var appID = 'xxxxxxxxxxxxxxxxxxxxxxxxxx';
                FB.init({ appId: appID, status: true, cookie: true, xfbml: true});
                login();
            };
            (function() {
                var e = document.createElement("script");
                e.async = true;
                e.src = "https://connect.facebook.net/en_US/all.js?xfbml=1";
                document.getElementById("fb-root").appendChild(e);
            }());
        };

        function login() {
            FB.login(function(response) {
                if (response.session) {
                    if (response.perms) {
                        // user is logged in and granted some permissions.
                        // perms is a comma separated list of granted permissions
                    } else {
                        // user is logged in, but did not grant any permissions
                    }
                } else {
                    // user is not logged in
                }
            }, {perms:'read_stream,publish_stream,offline_access'});
        };

    I want to call the "init" function, and after "init" it should call the "login" function (open up the Facebook login window) automatically. But I always get

        b is null
        FB.provide('',{ui:function(f,b){if(!f....onent(FB.UIServer._resultToken));}});

    as an error in Firebug. Can anybody help me? Does anybody have the same problem? Thanks
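    A hedged guess at the underlying problem, with a pattern that avoids it: calling FB.login() straight from fbAsyncInit means the popup is not tied to a user gesture, so popup blockers (and half-initialized SDK state) can break it. Checking the session with FB.getLoginStatus() after init and only calling FB.login() from a click handler tends to be more reliable – the button id below is hypothetical:

        window.fbAsyncInit = function() {
            FB.init({ appId: 'xxxxxxxxxxxxxxxxxxxxxxxxxx', status: true, cookie: true, xfbml: true });

            FB.getLoginStatus(function(response) {
                if (!response.session) {
                    // Defer FB.login() to a click so the popup is not blocked.
                    document.getElementById('fb-login').onclick = function() {
                        FB.login(function(r) { /* inspect r.session / r.perms */ },
                                 {perms: 'read_stream,publish_stream,offline_access'});
                    };
                }
            });
        };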

    Read the article

  • What considerations should be made when creating a reporting framework for a business?

    - by Andrew Dunaway
    It's a pretty classic problem. The company I work for has numerous business reports that are used to track sales, data feeds, and various other metrics. Of course this also means that there is a conglomerate of disparate frameworks, ASP.NET pages, and areas where these reports can be found. There have been some attempts at consolidating these into a single entity, but nothing has stuck yet. Since this is a common problem, and I am sure it has been solved innumerable times, I wanted to see what others have done. For the most part these reports can be boiled down to the following pieces:

    A SQL query against our database to gather data
    A presentation of the data, generally in a data grid
    Filtering that can vary based on data types and the business needs
    Some way to organize the reports; a single drop-down gets long and unmanageable quickly
    A method to download the data to alter further, perhaps as a CSV file

    My first thought was to create a framework in Silverlight with LINQ to SQL, mainly just because I like them and want to play with them, which is probably not the best reason. I also thought the controls grant a lot of functionality like sorting, dragging columns, etc. I was also curious about the printing in Silverlight 4. Which brings me around to my original question: what is the best way to do this? Is there a package out there I can just buy that will do it for me? The Silverlight approach seems pretty easy after it's set up and templated, but maybe it's a bad idea and I can learn from someone else?

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    After finding the steps at http://dattard.blogspot.com/2008/11/active-directory-forms-based.html for setting up Active Directory Forms Based Authentication, I was all set to complete this task, except for one problem: these steps are missing one vital detail for FBA to work with Active Directory – a supplement to step 3, needed before granting access in step 5 through the People Picker. You need to specify the Active Directory provider name to the People Picker; otherwise you will not be able to specify users through the Policy for Web Application.

        <PeoplePickerWildcards>
            <clear />
            <add key="ADMembershipProvider" value="%" />
        </PeoplePickerWildcards>

    Recently we needed to use Forms Based Authentication with Active Directory from an extranet. This is how we got it to work.

    1. Extend the Web Application
    Instead of tweaking the internal web app, extend the web application you want to expose to the extranet, giving it the required host headers etc.

    2. Configure SharePoint Central Admin to use FBA for the "new" Web Applications
    Log in to SharePoint Central Admin. Go to Application Management / Application Security / Authentication Providers and change the web application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to forms, enter ActiveDirectoryMembershipProvider under membership provider name (for example, "ADMembershipProvider"), and save this change.

    3. Update the web.config of the SharePoint Central Admin site
    Under the configuration node:

        <connectionStrings>
            <add name="ADConnectionString"
                 connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
        </connectionStrings>

    Under the system.web node:

        <membership defaultProvider="ADMembershipProvider">
            <providers>
                <add name="ADMembershipProvider"
                     type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                     connectionStringName="ADConnectionString"
                     connectionUsername="xxx"
                     connectionPassword="yyy"
                     enableSearchMethods="true"
                     attributeMapUsername="sAMAccountName"/>
            </providers>
        </membership>

    4. Update the web.config of the SharePoint web application
    Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication, and change the authentication element to:

        <authentication mode="Forms">
            <forms loginUrl="/_layouts/login.aspx"></forms>
        </authentication>

    5. Grant access on the extended Web Application
    Your extranet web application is now configured to use FBA. However, until the users who will access the site via FBA are given permissions, it will be inaccessible to them. To get started, open your browser and navigate to your farm's Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application. Then do the following steps:

    Click on Add Users. In the Zones drop-down, select the appropriate extranet zone. IMPORTANT: if you select the incorrect zone, you may not be able to resolve user names; the zone you select must match the zone of the web application that is configured to use FBA. Click the Next button. In the Users edit box, type the name of the FBA user to whom you wish to give full control for the site. Click the Resolve link next to the Users edit box. If the web application's FBA information has been configured correctly, the name will resolve and become underlined. Check the Full Control checkbox. Click the Finish button.

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third-party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, to build a library of code to keep track of key metrics of your server's performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*.

    What's it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199.

    How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it, and submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012; then sit back and wait while the judges review your code and your aims in writing the metric.

    Who are the judges and how will they judge the metrics? There are two judges for this competition: Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc.) and Jonathan Allen (um, yeah, Steve has done all the good stuff; I'm here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server; how having the metric running enables DBAs to meet best practice; and how interesting/original the idea for the metric is. Our combined decision will be final etc. etc.**

    What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You'll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like.

    How long does it take? Honestly? Once you have the T-SQL sorted, then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/

    What can I monitor? If you really, really want a Kindle or $199 (and let's face it, who doesn't?) and are momentarily stuck for inspiration, take a look at the example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of TSQL in those metrics, gathering important stats about how SQL Server is performing.

    * – framework may not be the best word here but I was under pressure and couldn't think of a better one. If you prefer, try 'engine' or 'application'? I don't know, pick something that makes sense to you.
    ** – for the full (legal) version of the rules, check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment and reduce the overall cost. Since Oracle Data Mining provides models as first-class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package.

    Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account.

        grant create any directory to dmuser;

    Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a Linux machine, we have the directory /scratch/oracle.

        CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle';

    If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query:

        select model_name from user_mining_models;

    There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls:

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp', 'dmdir', 'name = ''MY_DT_MODEL''');
        END;

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp', 'dmdir',
                     'name IN (''MY_DT_MODEL'', ''MY_KM_MODEL'')');
        END;

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp', 'dmdir');
        END;

    A .dmp file can be imported into another schema or database using the following procedure call, for example:

        BEGIN
          DBMS_DATA_MINING.IMPORT_MODEL ('MY_MODELS.dmp', 'dmdir');
        END;

    As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.

    Read the article

  • Tablet design guide, Endeca patterns now available

    - by JuergenKress
    UX Direct, an Oracle program that offers consultants, partners, and customers the same scientifically proven and reusable user experience best practices that Oracle uses to build Oracle Applications, recently added links to a new design guide for creating tablet-based solutions for enterprise applications, and to the recently published Endeca User Interface Design Pattern Library.

    The tablet design guide is available from the UX Direct Home page. Tap the button under "Latest patterns & tools" for "Oracle Applications UX Tablet Guide." It provides basic help for designers, developers, and project managers trying to approach tablet design and testing from an enterprise point of view. To hear what developers are saying about it, follow the links from this post on the User Experience Assistance blog.

    The newly released Endeca User Interface Design Pattern Library is also available from the UX Direct Home page and from a post on the User Experience Assistance blog. It describes principled ways to solve common user interface (UI) design problems related to search, faceted navigation, and discovery.

    The link between Simplified UI and Oracle UX strategy, plus content you can share on the cloud, ADF, tailoring, and more:

    Simplified User Interface in Oracle Fusion Applications Fronts Oracle Cloud Offerings
    This new article on Simplified UI has just been posted on Usable Apps. Learn about the three themes - simplicity, mobility, and extensibility - that Simplified UI embodies. These same principles are guiding the development of the next generation of the Oracle user experience.

    Oracle's Applications User Experience Strategy: One Cloud User Experience, with Optimized UIs Where and How You Want
    This podcast from Misha Vaughan, Director, User Experience, is now available on the Oracle University Knowledge Center. It is available for partners and Oracle employees at this iLearning link.

    Oracle Partner Builds User Experience That Hits Right Note for New Employees
    This new article on the Usable Apps website explores the experience of consultants at IntraSee as they implement a PeopleSoft onboarding process for Invesco, a global asset management company.

    The Feng Shui of Fusion
    This article in Oracle Scene is from Grant Ronald, Director of Product Management, on the tools of Fusion: Oracle JDeveloper and Oracle ADF.

    Hands-On Workshop with Fusion Applications and ADF UX Desktop Design Patterns
    This post on the Voice of User Experience, or VoX, blog from Misha Vaughan describes a new kind of workshop for partners and a handful of internal Oracle sales folks on extending Oracle Fusion Applications and building custom applications with Application Development Framework (ADF) while maintaining the Oracle user experience. To learn more about the content that was delivered during this three-day workshop, visit the Usable Apps blog. Recent posts from a new blog series take a look at several of the topics discussed during the workshop:

    Applications User Experience Fundamentals
    Visual Design for any Enterprise User Interface / Art School in a Box
    Wireframing / Blueprinting
    Usable Applications Concepts

    Tailoring videos
    This blog post from Richard Bingham, Applications Architect, on the Fusion Applications Developer Relations blog provides links to several videos that show many customization and development tasks using the Oracle Fusion Applications platform.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Partner outreach on the Oracle Fusion Applications user experience begins

    - by mvaughan
    by Misha Vaughan, Architect, Applications User Experience

    I have been asked the question repeatedly since about December of last year: "What is the Applications User Experience group doing about partner outreach?" My answer, at the time, was: "We are thinking about it." My colleagues and I were really thinking about the content or tools that the Applications UX group should be developing. What would be valuable to our partners? What will actually help grow their applications business, and fits within the applications user experience charter?

    In the video above, you'll hear Jeremy Ashley, vice president of the Applications User Experience team, talk about two fundamental initiatives that our group is working on now that speak straight to partners. Special thanks to Joel Borellis, Kelley Greenly, and Steve Hoodmaker for helping to make this video happen so flawlessly. Steve was responsible for pulling together a day of Oracle Fusion Applications-oriented content, including David Bowin, Director, Fusion Applications Strategy, on some of the basic benefits of Oracle Fusion Applications.

    Joel Borellis, Group Vice President, Partner Enablement, and David Bowin in the Oracle Studios.

    Nigel King, Vice President Applications Functional Architecture, was also on the list, talking about co-existence opportunities with Oracle Fusion Applications.

    Me and Nigel King, just before his interview with Joel.

    Fusion Applications User Experience 101: Basic education
    Oracle has invested an enormous amount of intellectual and developmental effort in the Oracle Fusion Applications user experience. Find out more about that at the Oracle Partner Network Fusion Learning Center (Oracle ID required). What you'll learn will help you uncover how, exactly, Oracle made Fusion General Ledger "sexy," and that's a direct quote from Oracle Ace Director Debra Lilley, of Fujitsu. In addition, select Applications User Experience staff members, as well as our own Fusion User Experience Advocates, can provide a briefing to our partners on Oracle's investment in the Oracle Fusion Applications user experience.

    Looking forward: Taking the best of the Fusion Applications UX to your customers
    Beyond a basic orientation to one of the key differentiators for Oracle Fusion Applications, we are also working on partner-oriented training. A question we are often getting right now is: "How do I help customers build applications that look like Fusion?" We also hear: "How do I help customers build applications that take advantage of the next-generation design work done in Fusion?"

    Our answer to this is training and a tool – our user experience design patterns, a set of user experience best practices. Design patterns are re-usable, usability-tested user experience components that make creating Fusion Applications-like experiences straightforward. It means partners can leverage Oracle's investment, but also gain an advantage by not wasting time solving a problem we've already solved. Their developers can focus on helping customers tackle the harder development challenges. Ultan O'Broin, an Apps UX team member, and I are working with Kevin Li and Chris Venezia of the Oracle Platform Technology Services team, as well as Grant Ronald in Oracle ADF, to bring you some of the best "how-to" UX training, customized for your local area. Our first workshop will be in EMEA. Stay tuned for an assessment and feedback from the event.

    Read the article

  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
    Sten E. Vesterli wrote the "Oracle ADF Enterprise Application Development – Made Simple" book, published by Packt Publishing in 2011: http://www.packtpub.com/oracle-adf-enterprise-application-development/book

    A common question on OTN, but also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java, but 4GL or PL/SQL, developers often look for answers to the following questions:

    · How long does it take to learn Oracle ADF?
    · How long does it take to replace a Forms application with ADF?
    · How many developers do I need?
    · Do I need to know Java to use ADF, and if yes, how well do I need to know it?
    · How do I structure my programming files, organizing them in JDeveloper workspaces, projects and libraries?
    · What is best practice for naming Java packages, and how do I avoid naming conflicts in ADF in general?
    · How many Application Modules do I need or should I create?
    · How do I test applications?

    Sten Vesterli answers all of the above questions and more in his book, which makes it a great value-add to the 3 existing Oracle ADF books. In order of complexity (which also is the order in which reading the available Oracle ADF books makes sense), in my opinion, Sten's book should come second – though it also is useful to those that are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, then the order of books to read to get you up to an expert level should be:

    1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw Hill 2010)
    2. Sten Vesterli; "Oracle ADF Enterprise Application Development – Made Simple" (Packt Publishing 2011)
    3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw Hill 2009)
    4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw Hill 2010)

    If you are not new to Oracle ADF and Oracle JDeveloper, then buy Sten Vesterli's book anyway. It is worth it, and you will want to have it on your bookshelf. See the table of contents below to get a better idea of what this book covers:

    · Chapter 1: The ADF Proof of Concept
    · Chapter 2: Estimating the Effort
    · Chapter 3: Getting Organized
    · Chapter 4: Productive Teamwork
    · Chapter 5: Prepare to Build
    · Chapter 6: Building the Enterprise Application
    · Chapter 7: Testing your Application
    · Chapter 8: Look and Feel
    · Chapter 9: Customizing the Functionality
    · Chapter 10: Securing your ADF Application
    · Chapter 11: Package and Deliver
    · Appendix: Internationalization

    The book is written with a lot of good humor, which makes it a very enjoyable read (from a geek's perspective, of course). My favorite quote – just in case you are interested – is from page 97, when Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century. …" So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with it. Quotes like these stick in your brain and make sure you never forget. Go for it!

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do. On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups". Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers". In Gail's session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:
    Use local variables – the optimizer can't sniff them, so it will optimize for an "average" value
    Use RECOMPILE (at the query or stored procedure level) – CPU hit
    OPTIMIZE FOR hint – the most common value you'll pass

    How to fix bad query patterns (see the sketch below):
    Don't use them – ha!
    Catch-all queries: use dynamic SQL, or OPTION (RECOMPILE)
    Multiple execution paths: split into multiple stored procedures, or OPTION (RECOMPILE)
    Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing. For the average Joe, she wouldn't recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material. Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn't quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan's hair. Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change. I think I'll utilize his update code for some other challenges that we face at my work.
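    For a concrete sense of the two hint-based fixes named above, here is a short T-SQL sketch against a hypothetical dbo.Orders table – an illustration of the syntax, not something from Gail's slides:

        -- OPTIMIZE FOR: the plan is built for the value you expect most often.
        SELECT OrderID, Total
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
        OPTION (OPTIMIZE FOR (@CustomerID = 42));

        -- RECOMPILE: the plan is rebuilt on every execution using the actual value.
        SELECT OrderID, Total
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
        OPTION (RECOMPILE);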

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx

    Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. As the topic owner, when you request a subscription on any channel, the owner needs to confirm they're happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK.

    Let's say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient; compare to the SQS class, which is just called QueueClient):

        var request = new CreateTopicRequest();
        request.Name = TopicName;
        var response = _snsClient.CreateTopic(request);
        TopicArn = response.TopicArn;

    In the response from AWS (which I'm assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

        var response = _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = TopicArn,
            Protocol = "sqs",
            Endpoint = _queueClient.QueueArn
        });

    That response will give you an ARN for the subscription, which you'll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn't give you a nice mechanism for doing that, so I've extended my AWS wrapper with a method that encapsulates it:

        internal void AllowSnsToSendMessages(TopicClient topicClient)
        {
            var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn);
            var request = new SetQueueAttributesRequest();
            request.Attributes.Add("Policy", policy);
            request.QueueUrl = QueueUrl;
            var response = _sqsClient.SetQueueAttributes(request);
        }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

        public const string AllowSendFormat = @"{
          ""Statement"": [
            {
              ""Sid"": ""MySQSPolicy001"",
              ""Effect"": ""Allow"",
              ""Principal"": { ""AWS"": ""*"" },
              ""Action"": ""sqs:SendMessage"",
              ""Resource"": ""%QueueArn%"",
              ""Condition"": {
                ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" }
              }
            }
          ]
        }";

    There's a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2.

    Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:

        var topicClient = new TopicClient("BigNews", "ImListening");

    And the topic client has a Subscribe() method, which calls into the message pump on the queue client:

        topicClient.Subscribe(x => Log.Debug(x.Body));
        var message = {}; // etc.
        topicClient.Publish(message);

    So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies.

    1. Provide a modern user experience
    As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast.

    2. Improve productivity and collaboration
    As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for their employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content.

    3. Leverage business processes and applications
    Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration.

    4. Enhance customer and supplier relationships
    Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to:

    Integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own
    Enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content
    Enable discussions, wikis, and blogs for more efficient collaboration

    5. Increase business visibility and responsiveness
    By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond much more agilely. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department. Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • Making it GREAT! Oracle Partners Building Apps Workshop with UX and ADF in UK

    - by ultan o'broin
    Yes, making is what it's all about. This time, Oracle Partners in the UK were making great-looking, usable apps with the Oracle Application Development Framework (ADF) and user experience (UX) toolkit. And what an energy-packed and productive event at the Oracle UK, Thames Valley Park, location it was. Partners learned the fundamentals of enterprise applications UX, why it's important, all about visual design, how to wireframe designs, and then how to build their already-proven designs in ADF. There was a whole day on mobile apps: learning about mobile design principles, free mobile UX and ADF resources from Oracle, and then trying it out. The workshop wrapped up with the latest Release 7 simplified UIs, Mobilytics, and other innovations from Oracle, and a live demo of a very neat ADF Mobile Android app built by an Oracle contractor. And what a fun two days both Grant Ronald of ADF and myself had in running the workshop with such a great audience, too! I particularly enjoyed the interaction in the wireframing and visual design sessions, and seeing some outstanding work done by partners. Of note from the UK workshop were innovative design features not seen before, which made me all the happier that developers were bringing their own ideas from the consumer IT world of mobility, simplicity, and social to the world of work apps in a smart way, within an enterprise methodology too.

    Partner wireframe exercise. Applying mobile design principles and UX design patterns means you're already productively making great usable apps! Next, over to Oracle ADF Mobile with it!

    One simple example: in the design of a mobile field service app, participants immediately saw how the UX and device functionality of the super UK-based app Hailo could influence their designs (the London cabbie influence, maybe?), as well as how the maps, cameras, barcode scanners and microphones we all use on our phones could be used at work. And, of course, ADF Mobile has the device integration solutions there too! I wonder, will U.S. workshops in Silicon Valley see an Uber UX influence (LOL)! That we also had partners experienced with Oracle Forms who could now offer a roadmap from Forms to Simplified UI and Mobile using ADF, and do it through the cloud, really made this particular workshop go "ZING!" for me. Many thanks to the Oracle PartnerNetwork (OPN) team for organizing this event with us, and to the representatives of the Oracle Partners who showed up and participated so well. That's what I love about this outreach: it's a two-way, solid value-add for all.

    Interested? Why would partners and developers with ADF skills sign up for this workshop? Here's why: Learn to use the Oracle Applications User Experience design patterns as the usability building blocks for applications development in Oracle Application Development Framework. The workshop enables attendees to build modern and visually compelling desktop and mobile applications that look and behave like Oracle Cloud Applications, and that can co-exist with partner integrations and new or existing applications deployments. Partners learn to offer customers and clients more than just coded functionality; instead they can provide a complete user experience with a roadmap for continued ROI from applications that also creates more business and attracts kudos and respect from other makers of apps as they're wowed by the results.
So, if you're a partner and interested in attending one of these workshops and benefitting from such learning, as well as having a platform to show off some of your own work, stay well tuned to your OPN channels, to this blog, to the VoX blog, and to the @usableapps Twitter account too. Can't wait? For developers and partners, some key mobile resources to explore now Oracle ADF Mobile UX Patterns and Components Wiki Oracle ADF Academy (Mobile) Oracle ADF Insider Essentials Oracle Applications Mobile User Experience Design Patterns and Guidance

    Read the article

  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB homes on Debian (5.0.3) for our Mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers due to a whole bunch of other issues that arise when going that way. Specifically, when mounting an SMB share with unix extensions switched on and the remote server bound to AD, the Finder cannot save files on the share, instead touching the file and then bombing out with a -36 IO error; folder creation is fine. Copying files in the Terminal behaves fine, and the problem seems to be limited to the Finder.

    The issue arises (I think) because the remote UID/GID is passed across when using unix extensions. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups, whilst we're using a rid map on the server. Consequently, there is a mismatch in ownership, which the Finder chooses to honour. How OS X appears to handle this is to use the remote uid and gid at the file permission level (see below) and then set an OS X ACL granting the local uid/gid the appropriate permissions on the file. I think the Finder touches the file (which the kernel allows because of the ACL) and then checks the filesystem perms and drops out with the IO error.

    On a client:

        fc-003353-d:homes2 root# ls -led test/
        drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/
         0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit
         1: group:ARTS\domain users allow
         2: group:everyone allow
         3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
         4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
         5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit

    We've tried the following without any luck:

    Setting the Linux-side file owner to match the OS X GID/UID
    Adding ACLs on the Linux filesystem which grant the OS X GID/UID perms
    Disabling extended attributes
    Setting streams=no in /etc/nsmb.conf on the client

    We're currently running a workaround, which is to just turn off unix extensions; this forces the Macs to mount the share as the local user with u=rwx perms. This works for most things but is causing a few apps that expect certain perms to break in subtle ways. Worst case scenario is that we'll continue running this way, but we would like to have the unix extensions on. Regards.

    Relevant SMB config below:

        [global]
            workgroup = ARTS
            realm = *snip*
            security = ADS
            password server = *snip*
            unix extensions = yes
            panic action = /usr/share/panic-action %d
            idmap backend = rid:ARTS=100000-10000000
            idmap uid = 100000-10000000
            idmap gid = 100000-10000000
            winbind enum users = Yes
            winbind enum groups = Yes
            veto files = /lost+found/aquota.*/
            hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/
            ea support = yes
            store dos attributes = yes
            map system = no
            map archive = no
            map readonly = no

    Read the article

  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter LAN connection is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this. If I do 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements as below:

        Tunnel adapter Local Area Connection* 6
        Tunnel adapter Local Area Connection* 7
        Tunnel adapter Local Area Connection* 12
        Tunnel adapter Local Area Connection* 13
        Tunnel adapter Local Area Connection* 14
        Tunnel adapter Local Area Connection* 15
        Tunnel adapter Local Area Connection* 16

    Except for *16, all the other tunnel adapter local area connections show "Media Disconnected". Why is the numbering for the tunnel adapter LAN connections not sequential? It is like 6, 7, 12, 13, 14, 15, 16 – a strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit, and there is a huge gap between 7 and 12. Any ideas? What is the need for so many tunnel adapter LAN connections? Can you tell me a scenario that requires all of them? I did 'ipconfig /all' to get more information. From the listing, I understand that:

        16, 15, 14, 12 are Microsoft 6to4 adapters
        13, 6 are ISATAP adapters
        7 is the Teredo tunneling pseudo-interface

    I understand that the above are for automatic tunneling, so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC 3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address, in order to transmit IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunnelling mechanisms. It seems that Teredo alone should be enabled by default in Vista, but my system does not show it to be enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) to be enabled by default. Any specific reasons for that? If I do 'ipconfig /all', why is only one Teredo adapter present while four 6to4 adapters are present? I searched the internet for answers to the above queries, but I am unable to find clear answers.

    Read the article

  • Having troubles connecting Magento to external Windows Database Server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work There must be something, I am missing I also researched the difference between linux sql databases and windows sql databases but have not come to conclusion, if installing Mysql on windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
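
    One way to narrow down an "Access denied" error like this is to test the credentials from the Magento box outside of Magento itself. Below is a minimal sketch assuming the PyMySQL client (pip install pymysql); the host IP is a placeholder for the internal address used above.

    # Minimal connectivity check from the Magento (Ubuntu) box. Assumes the
    # PyMySQL client; host/user/password below are placeholders.
    import pymysql

    try:
        conn = pymysql.connect(
            host="10.0.0.5",      # internal IP of the Windows MySQL server (placeholder)
            port=3306,
            user="magentouser",
            password="password",
            database="magento",
        )
        print("Connected:", conn.get_server_info())
        conn.close()
    except pymysql.err.OperationalError as e:
        # Error 1045 = credentials rejected (often a user/host mismatch in
        # the grant); error 2003 = server unreachable (firewall/endpoint).
        print("Failed:", e)

    If this fails with error 1045 while the same credentials work in the command-line client on the Windows server itself, the user or grant is likely tied to a different host than the one Magento connects from; error 2003 would instead point at the firewall rule or the Azure endpoint.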

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39  | Next Page >