Search Results

Search found 29959 results on 1199 pages for 'enterprise development'.


  • Hosted Mac OS X/iPhone development

    - by dwj
    I want to try my hand at developing for the iPhone, but I don't have an Intel-based Mac available to me; likewise, my budget doesn't include provisions for getting one anytime soon. I've tried messing around with winchain and that hasn't gone too well. I'm not interested in jailbreaking my phone and installing other tools for development. I've read posts on using older Macs with the development toolkit but haven't tried that yet. In the end I just want to know I can compile without doing a ton of extra work or finding workarounds. I don't mind only having access to a CLI compiler. Does anyone know of a hosting service that provides a shell access account on Intel-based Mac OS X machines?

    Read the article

  • Lightweight .NET web development?

    - by Breck Fresen
    Fellow stack overflowers, I'm currently working on a project that has a sizable amount of both client and web code. The client code is written in C# and the web piece is written in PHP. Maintaining consistency between the two worlds is becoming cumbersome, and I want to consolidate the web code to .NET. The issue is that I hate web development in ASP.NET Web Forms. I want something as raw as PHP, just using C# instead. I've read a little about ASP.NET MVC, but it looks like it abstracts too much of the request logic for my liking. Does anyone know of a lightweight way to allow C# + .NET to handle web requests? Should I be looking more closely at MVC? Thanks in advance, -- Breck
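
    In .NET terms, the closest thing to this kind of raw request handling at the time was a custom IHttpHandler, where a single class reads the request and writes the response with no page model in between. Purely as a language-neutral illustration of that style (this is TypeScript on Node, not .NET; port and paths are arbitrary):

        import { createServer } from "node:http";

        // One function sees the raw request and writes the raw response --
        // no page framework or abstraction in between.
        const server = createServer((req, res) => {
          if (req.url === "/hello") {
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end("Hello from a bare request handler");
          } else {
            res.writeHead(404);
            res.end("Not found");
          }
        });

        server.listen(8080); // arbitrary port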

    Read the article

  • Ideas for web development practical jokes?

    - by Ellie P.
    I am a web developer for a Django-based site for a student organization, and I have the opportunity to make the website temporarily absurd for a day of general campus-wide debauchery and chaos (long story, doesn't matter). What are your best ideas for web development practical jokes (that you could never use in the real world)? For example, one idea we had was to use a client-side script to convert each character to its upside-down equivalent in Unicode, sıɥʇ ǝʞıl ƃuıɥʇǝɯos. I'm not necessarily looking for Django-specific solutions; I imagine most of these things would happen on the front end. I am also quite aware that usability will suffer considerably; the point is to be fun for a day, and there will always be a link to the normal version of the site. Also, everything must be relatively cosmetic and easily reversible. I'm happy to swap out static CSS/JS/HTML/templates/images, and even temporarily add a Django view, but no messing with the data level!
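
    The flip itself is just a character-substitution table plus a string reversal. A minimal TypeScript sketch (the table covers only the letters needed for the example; a real one would map the whole alphabet to its upside-down Unicode look-alikes):

        const FLIP: Record<string, string> = {
          a: "ɐ", e: "ǝ", g: "ƃ", h: "ɥ", i: "ı", k: "ʞ",
          l: "l", m: "ɯ", n: "u", o: "o", s: "s", t: "ʇ",
        };

        function flipText(text: string): string {
          return [...text.toLowerCase()]
            .map((ch) => FLIP[ch] ?? ch) // leave unmapped characters alone
            .reverse()                   // reversed so it reads right-to-left
            .join("");
        }

        console.log(flipText("something like this")); // -> "sıɥʇ ǝʞıl ƃuıɥʇǝɯos"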

    Read the article

  • Cost of Design vs Development

    - by Marco
    I usually develop content management solutions in PHP. I work with designers that create the front end (HTML & CSS) and I usually develop the backend (PHP & MySQL) of said CMS. I know that the cost of the website may vary depending on the complexity of the HTML or the backend, but taking as reference a very basic website, what percentage of the cost should go to design and what percentage should go to development? I want to make clear that I, as the developer, am in charge of all the effects and animations of the websites using some of the JavaScript frameworks, not the designer. I just want to know what's the fair share for me and for the designer.

    Read the article

  • Mac OS X Development

    - by j-t-s
    Hi All, I would love to begin developing applications for Mac OS X, although I have no idea where to start. Problem: I currently do not own, and cannot afford, a Mac OS X-based computer. Solution: A very good friend loves trying out things that I've made for Windows, also owns a Mac OS X computer, and is willing to test these new creations. Now I am faced with another problem: I don't know which language to develop these apps in. I am a .NET developer, and seeing as I can only use a Windows-based PC to develop Mac apps, where should I start? I've heard of Mono, and have used it on Linux before; would Mono be an option for Mac development on a Windows-based computer, too? Are there any other ways around this? Any help is appreciated. :) Thank you jt

    Read the article

  • Is WordPress a good platform for webapp development?

    - by darlinton
    I am planning a webapp to control bank payment orders. As a quick overview: the user goes online and creates a payment order. This order goes to other people, who pay it and register the payment on the system. The system keeps track of all the payments, keeping the account balance up to date. The system needs a login system, bank integration, and to support, at some point, thousands of clients. We can find articles on the web about the benefits of using the WordPress platform to build webapps. However, I could not find discussion with counterarguments to using WordPress. As the platform is the most important choice in a webapp project, I would like to know more about the pitfalls and harms of choosing WordPress. The question is: what are the benefits and harms of choosing WordPress as a development platform for a webapp that needs to be integrated with other (backend) systems and to handle thousands of users (does it scale up?)?

    Read the article

  • Client Side Development - In Process/Completed Indicator Preferences?

    - by Brian
    Hello, I have been doing more client-side development, managing the UI on the client and submitting data to the server via web service calls. I'm not looking for implementation details, but was curious about developer preferences for displaying an operation in process, and what to display when it completes or even fails. For instance, just for clarification's sake, suppose you are submitting a profile form's data to a web service. I want to display that something's happening to the user, and give them a message that the form submitted successfully. I've in the past used a twitter-style message (that appears at the top), modal dialogs... I was curious what worked for others and any advice (what did the users like/not like, etc.). Again, technical details aren't needed. Thanks.

    Read the article

  • Which development language is best suited to Network Inventory

    - by dastardlyandmuttley
    Dear stackoverflow, I hope this is the correct type of question for stackoverflow to consider. I would like to develop a "hard core" application that performs network inventory. High-level requirements are: it should work on Windows and UNIX networks; it has to be extremely performant; it has to be 100% accurate; it should be (massively) scalable; and it should be fun to write. The sort of details I am after are the manufacturer and versions of all major workstation hardware components, such as motherboard, network card, sound card, hard drives, optical drives, memory, BIOS details, operating system information, etc. I don't want to have to distribute a client on each workstation to collect the information, although I will require automatic workstation discovery. I would value your thoughts on the best development language to employ. I know there are products such as NEWT and stuff like nmap... I would like to do this type of technical programming myself, "from scratch". Warm Regards DD
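
    Whatever language wins out, on the Windows side the usual agentless route is WMI, which can be queried over the network without installing anything on the workstations. As a rough illustration only (TypeScript on Node, shelling out to the wmic command line; a real inventory tool would use the WMI API directly, and the hostname below is a placeholder):

        import { execFile } from "node:child_process";
        import { promisify } from "node:util";

        const run = promisify(execFile);

        // Query BIOS details from one remote workstation over WMI. Other WMI
        // classes (Win32_BaseBoard, Win32_DiskDrive, Win32_OperatingSystem, ...)
        // cover the rest of the hardware list.
        async function inventory(host: string): Promise<string> {
          const { stdout } = await run("wmic", [
            `/node:${host}`,
            "bios",
            "get",
            "Manufacturer,SMBIOSBIOSVersion",
          ]);
          return stdout;
        }

        inventory("workstation-01").then(console.log).catch(console.error);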

    Read the article

  • Collaborative editing for .NET development - what are the possibilities

    - by Olav
    What are the best options for real-time collaborative editing for .NET development? (C#, VB.NET, ASP.NET - not Mono, unless it is the best way to get collaboration.)

    1) Anything possible with Visual Studio?
    2) Collaborative editors? I know Eclipse has real-time collaboration, but I don't know how far you can combine it with .NET support.
    3) Web-based tools?
    4) Desktop sharing tools like VNC, NX, etc.?

    The main point is that two developers in different locations should be able to see edits in real time. Both should be able to edit, or it should be easy to switch control. Regarding .NET, syntax highlighting etc. is better than nothing.

    Read the article

  • Is Software Development a Part of IT

    - by kzh
    I am not sure if this question is in the scope of SO, but I will ask anyway... I am currently going to school, majoring in computer science. Often I overhear students in the Management of Information Systems major call themselves programmers. These comments make me angry for a few reasons: they seem to trivialize what I do, and most of them are not even capable of doing what I can do. To me, MIS is IT, and they are technicians, while software developers are engineers and are not IT. So I guess my question is: is software development part of IT? I often see them lumped together.

    Read the article

  • Swapping an image during web development

    - by detly
    I'm trying to see what a certain webpage would look like if I replaced a certain image with another. Rather than upload the image, edit the site, etc., each time I tweak it, I'd like to know if there's a way to change the image in the page to my local version while viewing the remote page. I usually use Firebug for debugging web development, but I'm open to any other tool that might do this. (It is absolutely impossible to search for this and find anything but questions about dynamic image swapping on a deployed website, so sorry if this is a duplicate.)
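
    One low-tech way to do this from Firebug's console (or any browser dev console) is to overwrite the image's src with a locally served copy; the selector and URL below are placeholders. Note that a page served over http generally can't load file:// URLs, so serving the local image from a small local web server sidesteps that restriction:

        // Run in the dev console on the remote page.
        const img = document.querySelector<HTMLImageElement>("img#banner"); // hypothetical selector
        if (img) {
          img.src = "http://localhost:8000/banner-draft.png"; // local candidate image
        }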

    Read the article

  • RoR app running on mongrel in development but not production

    - by ANaimi
    Hello all, This is my first stab at Ruby on Rails. I've just deployed a very simple app to Heroku. The thing is that my app runs flawlessly under mongrel in development; when I run it with "mongrel_rails start -e production", however, I get the error "We're sorry, but something went wrong." For the life of me, I can't debug this: Heroku's logs aren't returning anything, the Exceptional addon in Heroku isn't returning anything, and I cannot find mongrel.log on my Windows machine (when I run mongrel using "mongrel_rails start -e production -d"). I'm using Rails 2.3.5 and sqlite3, with bundler to pack my gems. I was told that rails is probably not booting up correctly, but I can't find any other way to diagnose this. Any ideas? Thanks, ANaimi

    Read the article

  • Best Technologies for AJAX Web Development

    - by floatingfrisbee
    Hey Everyone, I have some experience in AJAX development, mostly on .NET and MooTools. However, I want to learn more and see what others out there think about the various other options available. I am looking more for advice about the front end; the back end I will most probably be coding in .NET using C# and WCF services. Please feel free to provide me as much information as you can, and I would appreciate any links to resources. List of options (feel free to add):

    - Write my own JavaScript
    - Use a framework like MooTools, jQuery, etc. Which one is better?
    - Use Google Web Toolkit. Am I tying myself to the limitations of GWT? Or are there no limitations?
    - ASP.NET AJAX
    - WPF (will this run on non-IE browsers?)
    - Flash (it'll be a pain to learn ActionScript)

    Thanks Jaspreet
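
    For comparison's sake, this is the core that every option on the list wraps: fire an asynchronous HTTP request, then update the DOM with the result. A bare TypeScript sketch (the endpoint and element id are placeholders); libraries like MooTools and jQuery mainly add browser-compatibility shims and conveniences around this:

        async function refreshStatus(): Promise<void> {
          const res = await fetch("/api/status"); // hypothetical endpoint
          const data: { message: string } = await res.json();
          const el = document.getElementById("status"); // hypothetical element
          if (el) el.textContent = data.message;
        }

        refreshStatus().catch(console.error);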

    Read the article

  • How do I learn Flash Game Development?

    - by grokker
    I'm currently a PHP programmer, and one of my childhood dreams is to create a game. The problem is I don't know Flash, and I'm not great at drawing, or particularly artistic. I can program a little with JavaScript and I consider myself intermediate with jQuery. Question: How do I get started with Flash game development? What books do I read first? The type of game is a side-scroller about an Indiana Jones type of character, and the setting is a jungle with trees and snakes and a lot of animals.

    Read the article

  • Stop Rails from unloading a module in development mode

    - by Gareth
    I have a module in my Rails app that lives in /lib:

        module MyModule
          mattr_accessor :the_variable

          class << self
            def setup
              yield self
            end
          end
        end

    From my environments/#{RAILS_ENV}.rb file I can then set an environment-specific value for the_variable:

        MyModule.setup do |my_module_config|
          my_module_config.the_variable = 42
        end

    This is lovely, and it seems to work (almost) fine. The problem is that in development mode, Rails via ActiveSupport::Dependencies unloads a load of modules, and reloads them in time for the new request. This is usually a great behaviour because it means you don't need to restart your localhost server when you make a code change. However, this also clears out my initialised the_variable, and when the next request comes in, the initialiser (obviously) isn't run again. The net effect is that subsequent requests end up having MyModule.the_variable set to nil rather than the 42 that I'm looking for. I'm trying to work out how to stop Rails unloading my module at the end of the request, or alternatively find another way to cleanly provide environment-specific configuration for my modules. Any ideas? :-/

    Read the article

  • Java technologies for web development

    - by Alex
    Hello. I'm a PHP programmer, but I'm extremely interested in learning Java, so I decided to change speciality from PHP to Java. At the moment I have an opportunity to try to make a quite simple web application (it should contain 2-3 forms, several pages with information from the database, and an authorization module), and I also have a chance to choose any technology I want. Besides, I have about 3 months for this task. I've decided to develop the site with Java technologies for the purpose of studying. I've already read a book about Java ("Java 2: The Complete Reference" by P. Naughton) and currently I'm reading "Thinking in Java" by B. Eckel. I clearly understand it's not enough for efficient development, but I want, at least, to try. I would appreciate advice on which framework (for example) or technology to choose (Spring, Grails, etc.), and which primary aspects and technologies of Java I should pay attention to. Thank you in advance.

    Read the article

  • Development life-cycle for making an application?

    - by Mohit Deshpande
    I have an idea that I want to make into an application (I have a C/C++, C#, and Java programming background, so I will be developing in Qt Creator for cross-compilation's sake). So now I am asking you senior developers: what should I do next? I know that all good programs come from an idea. Then what should I do? Prototype the UI? Then develop the code? Is there a cycle to the development of an application? I do not mean for this question to be subjective or argumentative.

    Read the article

  • Solutions for rapid front-end development?

    - by fayer
    I'm using an MVC framework, and I have learned some techniques that help me with different parts of RAD. Models: Doctrine / Visual Paradigm. Controllers/libraries: various design patterns. Now I only need to know what technique/solution I should use for the views, so that I can create views more rapidly, because I don't think it's efficient to code CSS/HTML manually, even though I understand it. It's the same principle as using Visual Paradigm to create both my MySQL database tables and Doctrine model classes: I believe using the right tools will boost development speed. So what could I use for the views to save time and energy and not reinvent the wheel all the time? Dreamweaver? Any CSS generation tools? 960/Blueprint for layout? Suggestions? Thanks

    Read the article

  • Apple Mac Software Development

    - by MattMorgs
    I'm planning on developing an Apple Mac application which will collect hardware information from the host Mac, along with installed software info. The hardware and software info will be collected in an encrypted XML file and then posted back to a website. The application should run as a "service" or background process on the Mac, and can be configured to collect the data on a frequent basis defined by another encrypted XML config file. I've done plenty of Windows-based software development, but never on the Mac. Can anybody point me in the direction of any useful info on how to develop on the Mac, collect hardware and software info, export to an XML file, encrypt files, and package a compiled app to run as a service? Is Objective-C, Cocoa, or Ruby a possible option? Many thanks for your help in advance!
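
    On the collection side, OS X ships with system_profiler, which can emit its hardware and installed-software reports as XML (plist format) - close to the XML file the poster describes. A sketch of just that step, in TypeScript on Node purely for illustration (a native app would get the same data through system APIs; the output filename is arbitrary and the encryption step is omitted):

        import { execFile } from "node:child_process";
        import { promisify } from "node:util";
        import { writeFile } from "node:fs/promises";

        const run = promisify(execFile);

        async function collectInventory(): Promise<void> {
          // Hardware overview plus installed applications, as plist XML.
          const { stdout } = await run(
            "system_profiler",
            ["-xml", "SPHardwareDataType", "SPApplicationsDataType"],
            { maxBuffer: 10 * 1024 * 1024 }, // the applications report can be large
          );
          // Encrypt here (e.g. with node:crypto) before posting to the website.
          await writeFile("inventory.plist.xml", stdout);
        }

        collectInventory().catch(console.error);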

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    - Backup Services: General Availability of Windows Azure Backup Services
    - Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    - Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    - Active Directory: Securely manage hundreds of SaaS applications
    - Enterprise Management: Use Active Directory to Better Manage Windows Azure
    - Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal: click New->Data Services->Recovery Services->Backup Vault. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    - Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers, and also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    - Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup, and also explains how to use Windows PowerShell cmdlets to do the same tasks.

    Below are some of the key benefits the Windows Azure Backup Service provides:

    - Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
    - Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    - Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information.
    - Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
    - Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel; only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure, and all traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines. These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation. Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance, but not delete the drives attached to the VM; you had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine in Them. One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc.) that you can map Virtual Machines into, and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a high availability way. One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn. SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration; starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

    Active Directory: Application Access Enhancements

    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

    - SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    - Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    - User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    - Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    - Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care; they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

    2) You can keep this directory entirely hosted in the cloud, or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.

    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry, nothing will stop working), but we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works.

    Subscriptions within a Directory. As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser; for example, I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory, so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

    Manage your Directory. You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account; clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources; you still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the administrators tab within it.

    Sign-In Integration within Visual Studio. If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email; the dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and are available to start using. No downloading of management certificates required - all of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories. If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering by Directory and Subscription. Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features, including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.

    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    - AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    - AutoScale: Ability to scale to zero Virtual Machines (very useful for Dev/Test scenarios)
    - AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    - Operation Logs: Auditing support for Service Bus management operations

    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
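
    To make point 3 above concrete: the classic Service Management API lives at management.core.windows.net and requires an x-ms-version header, and with this release a bearer token issued by Windows Azure Active Directory can stand in for a management certificate. A rough TypeScript sketch, with a placeholder subscription id and a token assumed to be acquired out of band (this is an illustration, not official sample code):

        // List hosted services via the classic Service Management REST API,
        // authenticating with a Windows Azure AD bearer token instead of a
        // management certificate. Subscription id and token are placeholders.
        const subscriptionId = "00000000-0000-0000-0000-000000000000";
        const token = process.env.AZURE_AD_TOKEN ?? "";

        async function listHostedServices(): Promise<string> {
          const res = await fetch(
            `https://management.core.windows.net/${subscriptionId}/services/hostedservices`,
            {
              headers: {
                Authorization: `Bearer ${token}`,
                "x-ms-version": "2013-08-01", // API version from around this release
              },
            },
          );
          if (!res.ok) throw new Error(`Management API returned ${res.status}`);
          return res.text(); // the classic API speaks XML
        }

        listHostedServices().then(console.log).catch(console.error);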

    Read the article

  • OBIEE 11.1.1 - (Updated) Best Practices Guide for Tuning Oracle® Business Intelligence Enterprise Edition (Whitepaper)

    - by Ahmed Awan
    Applies To: This whitepaper applies to OBIEE releases 11.1.1.3, 11.1.1.5 and 11.1.1.6. Introduction: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle® Business Intelligence Enterprise Edition performance, you need to monitor, analyze, and tune all the Fusion Middleware / BI components. This guide describes the tools that you can use to monitor performance, and the techniques for optimizing the performance of Oracle® Business Intelligence Enterprise Edition components. Click to Download the OBIEE Infrastructure Tuning Whitepaper (right-click or option-click the link and choose "Save As..." to download this file). Disclaimer: All tuning information stated in this guide is for orientation only; every modification has to be tested and its impact should be monitored and analyzed. Before implementing any of the tuning settings, it is recommended to carry out end-to-end performance testing that also includes obtaining baseline performance data for the default configurations, making incremental changes to the tuning settings, and then collecting performance data. Otherwise it may worsen system performance.

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated:

    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    [Excerpt from EDQ's CDS Individual Name Standardization Dictionary]

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    [EDQ's CDS Address Cleaning Process]

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
    The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention, to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

    [EDQ Duplicate Prevention Architecture]

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.
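
    As an illustration of the duplicate prevention flow described above (key generation, candidate selection by the application, then scored matching), here is a hypothetical TypeScript sketch. The endpoint paths and payload shapes are invented for the example; the real CDS web service contracts are defined by the pack itself:

        // Hypothetical record shape and EDQ service endpoints, for illustration only.
        interface CustomerRecord { id: string; name: string; address: string; }
        interface ScoredMatch { candidateId: string; score: number; }

        async function checkForDuplicates(
          driving: CustomerRecord,
          findCandidates: (keys: string[]) => Promise<CustomerRecord[]>, // app-side lookup
        ): Promise<ScoredMatch[]> {
          // 1. Ask the (stateless) Cluster Key Generation service for key values.
          const keyRes = await fetch("http://edq-host/cds/cluster-keys", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(driving),
          });
          const keys: string[] = await keyRes.json();

          // 2. The application selects candidate records sharing those keys --
          //    the app keeps ownership of its data and the transactional commit.
          const candidates = await findCandidates(keys);

          // 3. Present the driving record plus candidates to the Matching service,
          //    which returns the good matches scored by strength.
          const matchRes = await fetch("http://edq-host/cds/match", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ driving, candidates }),
          });
          return matchRes.json();
        }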

    Read the article

  • How to install Revolution R Enterprise?

    - by Abe
    Revolution R Enterprise is available as a Red Hat rpm file. Normally I would use alien to install an rpm file, but the instructions for installing this package have an install.py file that I am supposed to execute. When I run ./install.py, I get the following message:

        rpm: please use alien to install rpm packages on Debian, if you are really sure use --force-debian switch. See README.Debian for more details.

    There is no README.Debian file in the directory, and although I am not proficient in Python, I can tell that there are at least four different directories with *rpm files in them. Has anyone had success with this? If possible, I'd prefer to install the Enterprise version instead of the community version from the Ubuntu repository, so that I can test it out.

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Redgate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those are passed, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that will need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, and then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod checkins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hotfixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article
