Search Results

Search found 29574 results on 1183 pages for 'directory services'.


  • Join ActiveDirectory (Win 2k8R2) to OpenDirectory(Snow Leopard)

    - by Tom O'Connor
    The vast majority of questions regarding the interoperability of Active and Open Directory involve getting Mac clients to see an AD and auth against it. What we'd like to do is get a Windows 7 workstation to auth completely against Open Directory. We tried setting it up as an NT4-type PDC, and that doesn't work satisfactorily. We tried using pGina and the LDAP backend, which allows authentication but has no support for authorization; as a result, if we mount an NFS share, the user has the rights to do anything they damn well please. Not ideal for security (totally bloody unacceptable, actually). We tried using a Samba server (a newer version than on the Open Directory server) as an intermediary, so that it knows about the LDAP server on the OD server but uses Samba 4 instead of v3. That didn't work either. We could log in, but couldn't mount, and if we did, we had the same rights as with pGina. If we right-click the mounted drive in Windows and have a look at the NFS UID, it returns -2, not the correct (mapped) UID. So the final plan I've got is to use an Active Directory inside a Windows 2008 R2 virtual machine. What I want to achieve is to have the Active Directory sync its user data from Open Directory (read-only would be fine). That way, we'd have the ability to connect Windows 7 clients to a "virtual domain" which would actually just grab information from OD's LDAP. All the information I've found is about how to go the other way. Does anyone know how we can do this?

    Read the article

  • Is there a quick way of undoing a folder change in Far Manager?

    - by Johannes Rössel
    I love Far Manager. However, it has a feature to quickly go to the root directory of a drive with Ctrl+\. I do sometimes need and use this feature, but more frequently I use Ctrl+? to quickly insert the file name under the cursor into the command line. As it so happens, the ? key is located dangerously close to \, which is why I sometimes erroneously go to the root directory (which is then doubly unfortunate since I originally wanted to work with a file in the directory I was in). Now I could probably just redefine Ctrl+\ to do nothing, although I still sometimes need it (it can be replicated with a quick cd\, though). But Windows Explorer, in the wake of the WWW, provided us with a handy directory history and two separate ways of navigating backwards: backwards through the history and backwards through the hierarchy. Is there something quick and easy to get back to the folder I was in? This is less of an issue in C:\Users\Me (still nagging) but more so in deeper hierarchies.

    Read the article

  • Installing Sql Server 2005 SP2 - Getting error on analysis services component

    - by Greg_the_Ant
    At first many of the components didn't install, and I followed this workaround (fixing user/SID mappings in the registry). After that everything installed successfully except for Analysis Services, and I am getting the exact same error message as before on Analysis Services. (Are there perhaps other users installed by SQL Server that I'm not aware of?) Do you have any ideas? All of my searches seem to just point to the workaround above, which I have already applied. Error message from the log:

        Product                    : Analysis Services (MSSQLSERVER)
        Product Version (Previous) : 1399
        Product Version (Final)    :
        Status                     : Failure
        Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\OLAP9_Hotfix_KB921896_sqlrun_as.msp.log
        Error Number               : 29528
        Error Description          : MSP Error: 29528  The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.

    Read the article

  • Recommended website performance monitoring services? [closed]

    - by Dennis G.
    I'm looking for a good performance monitoring service for websites. I know about some of the available general monitoring services that check for uptime and notify you about unavailable services. But I'm specifically looking for a service with an emphasis on performance. I.e., I would like to see reports with detailed performance statistics from multiple locations world-wide, with a break-down on how long it took to fetch the different website resources, including third-party scripts such as Google Analytics and so on (the report should contain similar details such as the FireBug Net tab). Are there any such services and if so, which one is the best?

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never has. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can't perform any action on Office 365 users anymore: all changes need to be made in the local Active Directory and then replicated by the synchronization process. For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users who need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, as you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes and then manually edit them (!). Or, you can install at least one local Exchange server and then use the Exchange administrative tools to configure the required settings. Is this correct, or am I missing something? Is there any way to synchronize user accounts and passwords but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?
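
    The "extend the schema and edit attributes" route mentioned above generally boils down to writing the relevant values on the local AD object and letting directory synchronization carry them up. Below is a minimal sketch of that idea using System.DirectoryServices; the DN, the alias, and the presence of the Exchange schema attribute are assumptions for illustration, not a statement of what this particular environment supports.

        // Hedged sketch: stamp extra mail attributes on a local AD user so that
        // directory synchronization propagates them to Office 365.
        // Assumes a reference to the System.DirectoryServices assembly.
        using System;
        using System.DirectoryServices;

        class AddSecondaryAddress
        {
            static void Main()
            {
                // Hypothetical distinguished name.
                using (var user = new DirectoryEntry("LDAP://CN=Jane Doe,OU=Staff,DC=example,DC=local"))
                {
                    // Secondary SMTP addresses conventionally use a lowercase "smtp:" prefix.
                    user.Properties["proxyAddresses"].Add("smtp:jane.alias@example.com");

                    // Only valid if the Exchange schema extensions exist in the forest.
                    user.Properties["msExchHideFromAddressLists"].Value = true;

                    user.CommitChanges();
                }
            }
        }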

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include:

      • Mobile Services: Custom API support
      • Mobile Services: Git Source Control support
      • Mobile Services: Node.js NPM Module support
      • Mobile Services: A .NET API via NuGet
      • Mobile Services and Web Sites: Free 20MB SQL Database option for Mobile Services and Web Sites
      • Mobile Notification Hubs: Android Broadcast Push Notification support

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Mobile Services: Custom APIs, Git Source Control, and NuGet

    Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications. Starting with the first preview we supported the ability to easily extend your data backend logic with server-side scripting that executes as part of client-side CRUD operations against your cloud-backed data tables.

    With today's update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios, including the ability to work with data sources other than SQL Databases (for example: Table Services or MongoDB), broker calls to 3rd party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows Periodic Notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn't correspond to a database operation. The custom APIs can be written in server-side JavaScript (using Node.js) and can use Node's NPM packages. We will also be adding support for custom APIs written using .NET in the future.

    Creating a Custom API

    Adding a custom API to an existing Mobile Service is super easy. Using the Windows Azure Management Portal you can now simply click the new "API" tab within your Mobile Service, and then click the "Create a Custom API" button to create a new Custom API within it. Give the API whatever name you want to expose, and then choose the security permissions you'd like to apply to the HTTP methods you expose within it. You can easily lock down the HTTP verbs of your Custom API to be available to anyone, only those who have a valid application key, only authenticated users, or administrators. Mobile Services will then enforce these permissions without you having to write any code.

    When you click the OK button you'll see the new API show up in the API list. Selecting it will enable you to edit the default script that contains some placeholder functionality. Today's release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.

    Below is an example of some code for reading and returning data from Windows Azure Table Storage using the Azure Node API. After saving the changes, you can now call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to a control within your XAML:

        private async void RefreshTodoItems()
        {
            var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);
            ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);
        }

    Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like with data requests, custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and it also enables you to easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about how to use the new Custom API support, and start adding Custom APIs to your app today.

    Mobile Services: Git Source Control Support

    Today's Mobile Services update also enables source control integration with Git. The new source control support provides a Git repository as part of your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux).

    To use the new support, navigate to the dashboard for your mobile service and select the Set up source control link. If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository. Once you configure this, you can switch to the configure tab of your Mobile Service and you will see a Git URL you can use to access your repository. You can use this URL to clone the repository locally from your favorite command line:

        > git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git

    The repository contains a service folder with several subfolders. Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention of <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts and a few miscellaneous things such as the APNS feedback script.

    Let's modify the table script todos.js file so that we have slightly better error handling when an exception occurs when we query our Table service:

        // todos.js
        tableService.queryEntities(query, function(error, todoItems) {
            if (error) {
                console.error("Error querying table: " + error);
                response.send(500);
            } else {
                response.send(200, todoItems);
            }
        });

    Save these changes, and now back at the command line prompt commit the changes and push them to the Mobile Service:

        > git add .
        > git commit -m "better error handling in todos.js"
        > git push

    Once deployment of the changes is complete, they will take effect immediately, and you will also see the changes reflected in the portal. With the new Source Control feature, we're making it really easy for you to edit your mobile service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal.

    Mobile Services: NPM Module Support

    The new Mobile Services source control support also allows you to add any Node.js module you need in the scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to use Mongo instead of Windows Azure Tables in our example above. Set up MongoDB by either purchasing a MongoLab subscription (which provides MongoDB as a Service) via the Windows Azure Store or setting it up yourself on a Virtual Machine (either Windows or Linux). Then go to the service folder of your local Git repository and run the following command:

        > npm install mongoose

    This will add the Mongoose module to your Mobile Service scripts. After that you can use and reference the Mongoose module in your custom API scripts to access your Mongo database:

        var mongoose = require('mongoose');
        var schema = mongoose.Schema({ text: String, completed: Boolean });

        exports.get = function (request, response) {
            mongoose.connect('<your Mongo connection string>');
            var TodoItemModel = mongoose.model('todoitem', schema);
            TodoItemModel.find(function (err, items) {
                if (err) {
                    console.log('error: ' + err);
                    return response.send(500);
                }
                response.send(200, items);
            });
        };

    Don't forget to push your changes to your mobile service once you are done:

        > git add .
        > git commit -m "Switched to use Mongo Labs"
        > git push

    Now our Mobile Service app is using MongoDB! Note: with today's update, usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data and custom CRON tasks) shortly.

    New Mobile Services NuGet package, including .NET 4.5 support

    A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this new library is now a stable .NET client SDK for Mobile Services and is no longer a pre-release package. Today's update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet.

    Mobile Services and Web Sites: Free 20MB Database for Mobile Services and Web Sites

    Starting today, every customer of Windows Azure gets one free 20MB database to use for 12 months (for both dev/test and production) with Web Sites and Mobile Services.

    When creating a Mobile Service or a Web Site, simply choose the new "Create a new Free 20MB database" option to take advantage of it. You can use this free SQL Database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service.

    Notification Hubs: Android Broadcast Push Notification Support

    Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices. Today we're excited to announce new Notification Hubs support for sending push notifications to Android devices as well.

    Push notifications are a vital component of mobile applications. They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps where up-to-date information increases employee responsiveness to business events. You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine).

    Notification Hubs provide you with the following capabilities:

      • Cross-platform Push Notification Support. Notification Hubs provide a common API to send push notifications to iOS, Android, or Windows Store devices at once. Your app can send notifications in platform-specific formats or in a platform-independent way.
      • Efficient Multicast. Notification Hubs are optimized to enable push notification broadcasts to thousands or millions of devices with low latency. Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users. Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed of, and provide a very easy way to send filtered notifications to an infinite number of users/devices with a single API call.
      • Extreme Scale. Notification Hubs enable you to reach millions of devices without having to re-architect or shard your application. The pub/sub routing mechanism allows you to broadcast notifications in a super-efficient way. This makes it incredibly easy to route and deliver notification messages to millions of users without having to build your own routing infrastructure.
      • Usable from any Backend App. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IaaS VM.

    It is easy to configure Notification Hubs to send push notifications to Android.

    Create a new Notification Hub within the Windows Azure Management Portal (New->App Services->Service Bus->Notification Hub). Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, and simply paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging Settings. Then just add code to the OnCreate method of your Android app's MainActivity class to register the device with Notification Hubs:

        gcm = GoogleCloudMessaging.getInstance(this);
        String connectionString = "<your listen access connection string>";
        hub = new NotificationHub("<your notification hub name>", connectionString, this);
        String regid = gcm.register(SENDER_ID);
        hub.register(regid, "myTag");

    Now you can broadcast notifications from your .NET backend (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered with the "myTag" tag via a single API call (you can literally broadcast messages to millions of clients you have registered with just one API call):

        var hubClient = NotificationHubClient.CreateClientFromConnectionString(
            "<your connection string with full access>",
            "<your notification hub name>");
        hubClient.SendGcmNativeNotification("{ 'data' : { 'msg' : 'Hello from Windows Azure!' } }", "myTag");

    Notification Hubs provide an extremely scalable, cross-platform push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices. It will make enabling your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs here on MSDN.

    Summary

    The above features are now live and available to start using immediately (note: some of the services are still in preview). If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one that is allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that on Fig.A and Fig.B. The difference between the two is that on Fig.A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are done from the "Orchestrating layer" only). On Fig.B the application is composed from services, but one service calls another service. Does the architecture on Fig.B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture on Fig.A be considered superior to the one on Fig.B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.
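
    To make the two figures concrete, here is a small, purely illustrative C# sketch (all service names are invented). In the Fig. A style the orchestrating layer owns the composition; in the Fig. B style one service takes a dependency on another and calls it directly.

        public interface IInventoryService { void Reserve(int productId); }
        public interface IBillingService { void Charge(int customerId, decimal amount); }
        public interface IOrderService { void PlaceOrder(int customerId, int productId, decimal amount); }

        // Fig. A: all composition logic sits in the orchestrating layer.
        public class OrderOrchestrator
        {
            private readonly IInventoryService inventory;
            private readonly IBillingService billing;

            public OrderOrchestrator(IInventoryService inventory, IBillingService billing)
            {
                this.inventory = inventory;
                this.billing = billing;
            }

            public void PlaceOrder(int customerId, int productId, decimal amount)
            {
                inventory.Reserve(productId);        // orchestrator -> service 1
                billing.Charge(customerId, amount);  // orchestrator -> service 2
            }
        }

        // Fig. B: OrderService is itself exposed as a service and calls the other
        // services internally, so the services are now coupled to each other.
        public class OrderService : IOrderService
        {
            private readonly IInventoryService inventory;
            private readonly IBillingService billing;

            public OrderService(IInventoryService inventory, IBillingService billing)
            {
                this.inventory = inventory;
                this.billing = billing;
            }

            public void PlaceOrder(int customerId, int productId, decimal amount)
            {
                inventory.Reserve(productId);        // service -> service call
                billing.Charge(customerId, amount);
            }
        }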

    Read the article

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF Services as part of an API for 3rd parties to access data within a product I work on. There are currently a set of services exposed as "classic" .net Web Services and I need to emulate the behaviour of these, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc) and return a session token (represented as a GUID), which is then passed in on calls to any other method (There's also a ReleaseAuthenticationToken method, no guesses needed as to what that does!). What I want to do is implement multiple WCF services, such as: ProductData UserData and have both of these services share a common implementation of Acquire/Release. From the base project that is created by VS2k8, it would appear I will start with, per service: public class ServiceName : IServiceName { } public interface IServiceName { } Therefore my questions would be: Will WCF tolerate me adding a base class to this, public class ServiceName : ServiceBase, IServiceName, or does the fact that there's an interface involved mean that won't work? If "No it won't work" to Question 1, could I change IServiceName so it extends another interface, IServiceBase, thus forcing the presence of Acquire/Release methods, but then having to supply the implementation in each service. Are 1 and 2 both really bad ideas and there's actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?
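
    For what it's worth, WCF is happy with a service class that inherits a base class as long as the class still implements its contract interface, and contract interfaces can themselves inherit a shared contract. A minimal sketch combining ideas 1 and 2 (the type and member names below are hypothetical, not the existing API):

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IServiceBase
        {
            [OperationContract]
            Guid AcquireAuthenticationToken(string username, string password);

            [OperationContract]
            void ReleaseAuthenticationToken(Guid sessionToken);
        }

        [ServiceContract]
        public interface IProductData : IServiceBase
        {
            [OperationContract]
            string GetProductName(Guid sessionToken, int productId);
        }

        // Shared Acquire/Release implementation lives in a plain base class;
        // WCF only requires the concrete service class to implement the contract.
        public abstract class ServiceBase
        {
            public Guid AcquireAuthenticationToken(string username, string password)
            {
                // validate the user, create a session record, etc.
                return Guid.NewGuid();
            }

            public void ReleaseAuthenticationToken(Guid sessionToken)
            {
                // remove the session record
            }
        }

        public class ProductData : ServiceBase, IProductData
        {
            public string GetProductName(Guid sessionToken, int productId)
            {
                // check sessionToken via the shared logic, then do the real work
                return "example product";
            }
        }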

    Read the article

  • Meaning of directories on Unix and Unix like systems

    - by Luke
    I've been using Linux for a couple of years now, but I still haven't figured out the origin or meaning of some of the directory names on Unix and Unix-like systems. E.g. what does etc stand for, or var? Where does the opt name come from? And while we're on the topic anyway: can someone give a clear explanation of which directory is best used for what? I sometimes get confused about where certain software is installed, or what the most appropriate directory is to install software into.

    Read the article

  • Good default for XDG_RUNTIME_DIR?

    - by cadrian
    The XDG Base Directory Specification is a very interesting spec for user directories. It also provides good default values, except for XDG_RUNTIME_DIR. Now I am writing software that needs to create named pipes. It is a per-user client-server framework (there is a FIFO for the server and a FIFO per client). If XDG_RUNTIME_DIR is not defined, I am currently using a per-user subdirectory in /tmp, but that does not satisfy all the specified conditions (viz. the paragraph starting with "The lifetime of the directory MUST be bound to the user being logged in…"). Is /tmp/myserver-$USER good enough? Edit: I saw a few suggestions elsewhere. "." is quite unsatisfactory (at least because it is not an absolute path). I also saw /var/run/user/$USER, which is not bad, but that directory does not exist (at least on my box running Debian testing).

    Read the article

  • Migrating Identity Providers - specifying a new user's password hash.

    - by Stephen Denne
    We'd like to switch Identity Provider (and Web Access Manager), and also the user directory we use, but would like to do so without users needing to change their password. We currently have the SSHA hashes of the passwords. I'm expecting to write code to perform the migration. I don't mind how complex the code has to be; rather, my concern is whether such a migration is possible at all. MS Active Directory would be our preferred user store, but I believe that it cannot have new users set up in it with a particular password hash. Is that correct? What user directory stores can be populated with users already set up with an SSHA password hash? What Identity Provider and Access Management products work with those stores?

    Read the article

  • Download all files, directories and subdirectories from an HTTP server

    - by Jack
    I want to download all files, directories and subdirectories (recursively) from a remote HTTP server. I found some solutions for FTP servers, but they don't work for HTTP. So far I've had no luck with wget -r or -m: it downloads all the directories in the root and their respective index.html files, but not all the files and subdirectories underneath them (note that a subdirectory may contain further directories, and so on). I'm not sure about the tags, so please fix them for me if needed. Note: I'm not a native English speaker, sorry for the bad English.

    Read the article

  • Why does touching "d_name" make calls to readdir() fail?

    - by Sarah Mani
    Hi, I'm trying to write a little helper for Windows which eventually will accept a file extension as an argument and return the number of files of that kind in the current directory. To do so, I'm reading the file entries in the directories and after getting the extension I'd like to convert it to lowercase to compare it with the yet-to-add specified argument. When converting the extension to lowercase I found that touching even a duplicate string of the d_name variable will cause a strange behaviour, like no more calls to readdir are called. Here is the code I'm using right now (the commented code is preliminary) and outputs for a given directory: #include <ctype.h> #include <dirent.h> #include <stdio.h> #include <string.h> char * strrch(char *string, size_t elements, char character) { char *reverse = string + elements; while (--reverse != string) if (*reverse == character) return reverse; return NULL; } void test(char *string) { // Even being a duplicate will make it fail: char *str = strdup(string); printf("Strings: %s %s\n", string, str); *str = 'a'; printf("Strings: %s %s\n", string, str); //unsigned short int i = 0; //for (; str[i] != '\0', str++; i++) // str[i] = tolower((unsigned char) str[i]); //puts(str); } int main(int argc, char **argv) { DIR *directory; struct dirent *element; if (directory = opendir(".")) { while (element = readdir(directory)) test(strrch(element->d_name, element->d_namlen, '.')); closedir(directory); puts(NULL); } else puts("Couldn't open the directory.\n"); } Output without modifying the duplicate (modification and the second printf call commented): Strings: (null) (null) Strings: . . Strings: .exe .exe Strings: .pdf .pdf Strings: .c .c Strings: .ini .ini Strings: .pdf .pdf Strings: .pdf .pdf Strings: .pdf .pdf Strings: .flac .flac Strings: .FLAC .FLAC Strings: .lnk .lnk Strings: .URL .URL Output of the same directory (with the code above, with the 2 printfs): Strings: (null) (null) Is there anything wrong? Is it a compiler issue? I'm using GCC 4.4.3 in Windows (MinGW) right now. Thank you very much for your help. By the way, is there any other way to work with files and directories in a Windows environment not using the POSIX functions?

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

      • An organisation will have a number of disparate IT systems
      • Some of these systems will have redundant data and functionality
      • Integration will give considerable ROI
      • Integration will already be under way
      • Services will already exist in the organisation
      • These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

      1. Alignment between the business and IT
      2. Integration of disparate systems
      3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model being current at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.

So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer.     If  company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.   If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?   With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible. Based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.     Could the problem be that there are two different problem domains? And the whole concept of SOA as it being described by clever salespeople today creates an example of oft dreaded ‘tight coupling’ between these two domains?   Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?   Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals.   If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A.   Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.   This is not to suggest that the two problem domains are unrelated, a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.       Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create a integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).     The business architecture describes the processes and business objects in the business domain.   The technical architecture describes the hosting and management and implementation of services.   
The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation.   If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other.   This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.   How do we do this?  The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling etc.   This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other, even if this approach is frowned upon today, there are many companies doing so and seeing ROI.   Of course there are disadvantages to this. The biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation with redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together so that they are forever bound to the model that only applies at the time of the modeling work will not really achieve a great deal. Architecture is all about trade offs, and here a choice has to be made. 
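
    As a purely conceptual sketch of the virtual-service idea (this is not the Managed Services Engine API; every name below is invented), the 'SOA service' can be a thin facade whose contract follows the business model and which transforms and routes each call to whatever adapter service currently implements it:

        using System.ServiceModel;

        // Contract shaped by the business architecture (the analysts' view).
        [ServiceContract]
        public interface ICustomerProcess
        {
            [OperationContract]
            CustomerSummary GetCustomer(string customerNumber);
        }

        public class CustomerSummary { public string Name { get; set; } }

        // Contract exposed by an existing LOB/adapter service (the technical view).
        public interface ICrmAdapter { CrmRecord FetchRecord(int crmId); }

        public class CrmRecord { public string DisplayName { get; set; } }

        // The virtual service: maps business messages to the adapter's types and back,
        // insulating the modeled contract from changes in the implementation.
        public class CustomerProcessFacade : ICustomerProcess
        {
            private readonly ICrmAdapter adapter;

            public CustomerProcessFacade(ICrmAdapter adapter) { this.adapter = adapter; }

            public CustomerSummary GetCustomer(string customerNumber)
            {
                var record = adapter.FetchRecord(int.Parse(customerNumber)); // routing + transformation
                return new CustomerSummary { Name = record.DisplayName };
            }
        }
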
    The choice is between something that will initially be of low quality but will work, or something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • iPhone: Can access files in documents directory in Simulator, but not device

    - by Kevin Cupp
    Hi there! I'm writing an app that copies some contents of the bundle into the applications Document's directory, mainly images and media. I then access this media throughout the app from the Document's directory. This works totally fine in the Simulator, but not on the device. The assets just come up as null. I've done NSLog's and the paths to the files look correct, and I've confirmed that the files exist in the directory by dumping a file listing in the console. Any ideas? Thank you! EDIT Here's the code that copies to the Document's directory NSString *pathToPublicationDirectory = [NSString stringWithFormat:@"install/%d",[[[manifest objectAtIndex:i] valueForKey:@"publicationID"] intValue]]; NSString *manifestPath = [[NSBundle mainBundle] pathForResource:@"content" ofType:@"xml" inDirectory:pathToPublicationDirectory]; [self parsePublicationAt:manifestPath]; // Get actual bundle path to publication folder NSString *bundlePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:pathToPublicationDirectory]; // Then build the destination path NSString *destinationPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d", [[[manifest objectAtIndex:i] valueForKey:@"publicationID"] intValue]]]; NSError *error = nil; // If it already exists in the documents directory, delete it if ([fileManager fileExistsAtPath:destinationPath]) { [fileManager removeItemAtPath:destinationPath error:&error]; } // Copy publication folder to documents directory [fileManager copyItemAtPath:bundlePath toPath:destinationPath error:&error]; I am figuring out the path to the docs directory with this method: - (NSString *)applicationDocumentsDirectory { return [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; } And here's an example of how I'm building a path to an image path = [NSString stringWithFormat:@"%@/%d/%@", [self applicationDocumentsDirectory], [[thisItem valueForKey:@"publicationID"] intValue], [thisItem valueForKey:@"coverImage"]];

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides the very nice functionality for client specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
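
    One way this is sometimes approached (sketched below on the assumption that a read-only, reflection-based data service is acceptable; the type names and loader are hypothetical) is to expose a data source class whose IQueryable properties concatenate the rows from each database. Note that the naive Concat pulls rows into memory before the client's OData filter is applied, and keys must stay unique across the databases, so this trades performance and safety for simplicity.

        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("Id")]
        public class FileRecord
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Exposes one combined entity set built from several databases.
        public class CombinedSource
        {
            public IQueryable<FileRecord> Files
            {
                get
                {
                    IEnumerable<FileRecord> db1 = LoadFiles("connection-string-1"); // hypothetical loader
                    IEnumerable<FileRecord> db2 = LoadFiles("connection-string-2");
                    return db1.Concat(db2).AsQueryable();
                }
            }

            private static IEnumerable<FileRecord> LoadFiles(string connectionString)
            {
                // query one database (ADO.NET, EF, LINQ to SQL, ...) and map rows to FileRecord
                yield break;
            }
        }

        public class FilesDataService : DataService<CombinedSource>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Files", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }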

    Read the article

  • maven-release-plugin: Perform fails with 'working directory "...workspace\target\checkout\workspace"

    - by Ed
    Hi, I have a Maven project that fails when release:perform is called, though release:prepare works as expected. I have found the bug report below, which certainly seems to resemble the issue I have, but I'm not entirely sure I understand the problem: MRELEASE516. The last few lines of output I get:

        [INFO] Executing: cmd.exe /X /C "p4 -d E:\hudson\jobs\myHudsonJob\workspace\target\checkout -p 10.20.0.38:1666 client -d myProjectWorkspace-MavenSCM-E:\hudson\jobs\myHudsonJob\workspace\target\checkout"
        [INFO] Executing goals 'deploy'...
        [WARNING] Base directory is a file. Using base directory as POM location.
        [WARNING] Maven will be executed in interactive mode, but no input stream has been configured for this MavenInvoker instance.
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error executing Maven. Working directory "E:\hudson\jobs\myHudsonJob\workspace\target\checkout\workspace" does not exist!

    From reading the bug report, the possible cause of the error is related to my modules' structure; I've tried to outline it below:

        /workspace
        |
        |+ pom.xml  (root pom whose parent is the build pom;
        |            release:perform is called on this pom)
        |           [Modules: moduleA and moduleB]
        |
        |- moduleA
        |    |+ pom.xml (parent is also the build pom)
        |    |+ build/pom.xml (the build pom - no custom parent)
        |- moduleB
        |    |+ pom.xml (parent is the build pom)

    From the error it seems that the root pom should be in some common directory inside 'workspace', but I tried that and it doesn't work, nor does it make sense to me why I would need it. What does the warning "Base directory is a file" want me to do instead?! Thanks in advance.

    Read the article

  • IHttpModule not being applied to virtual directory

    - by arthurdent510
    I have a network folder that is mapped into my IIS app as a virtual directory, and I'm trying to do some authentication for files located there with an IHttpModule. I've verified that the IHttpModule is firing properly for everything else in my app, just not for the files located in the virtual directory. Most of what I've found says that the directory can't be listed as an application (which it isn't) and then everything should work. The other solution I found was to add the module tag to the tag, but that didn't seem to help either. Everything that I've found talks about stopping this from happening. So my question is: what could be set that is causing this not to work? Is there a certain execute permission that needs to be set? Any other IIS settings that could cause this? It is an MVC app, and this is how my directory structure is laid out:

        server/app                    <- my application folder
        server/app/content/downloads  <- downloads is the virtual directory

    Do I have to add the virtual directory directly under my app directory? Is that part of the problem? I don't have direct control of the server my code is running on, so testing things out is a bit of a pain... so I was looking for some more thoughts before starting to send emails off to my operations people. Thanks!
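
    For reference, the kind of module being described is roughly the sketch below (names are hypothetical). One thing worth checking alongside it: in classic-mode IIS a managed module only sees requests that are actually routed to ASP.NET, which is a common reason static files in a virtual directory never reach it unless a wildcard script map or the integrated pipeline (e.g. runAllManagedModulesForAllRequests) is in place.

        using System;
        using System.Web;

        // Hypothetical sketch of an authentication check for a downloads folder.
        public class DownloadAuthModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.AuthorizeRequest += OnAuthorizeRequest;
            }

            private static void OnAuthorizeRequest(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                string path = app.Request.AppRelativeCurrentExecutionFilePath ?? string.Empty;

                if (path.StartsWith("~/content/downloads/", StringComparison.OrdinalIgnoreCase)
                    && !app.Request.IsAuthenticated)
                {
                    app.Response.StatusCode = 403; // or redirect to a login page
                    app.CompleteRequest();
                }
            }

            public void Dispose() { }
        }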

    Read the article

  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory with files and a database table with what should be the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do this in a brute-force manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the file values in the directory array and use in_array() on the database table array to verify each filename is in that array; if not, build up an array of the missing filenames and run a DB query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory array will just be deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()?

    Read the article

  • What to keep in mind when creating your own custom web services API?

    - by John Conde
    I have created a website which allows users to sign up for, and use, an online service. To help promote the website we will have resellers who will be offering their own branded services through us. The initial plan is to allow resellers to place registration, login, and lost-password forms on their own websites and use an API created by us to handle these requests. I have begun outlining how I expect the API to work (and have started documenting it as well), and I want to make sure I get it right, or as close to right as I can, from the beginning, as I know that once you have declared a public API you want to avoid changing it at all costs. So far I have decided:

      • To have the user pass their account credentials with each request
      • To require SSL for all requests

    What else should I be keeping in mind?

    Read the article

  • Best practice when working with web services that return objects?

    - by Raj
    Hello, I'm currently working with web services that return objects such as a list of files, e.g. a File array. I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example a repeater/ListView), or whether to first parse it into my own list of a "file class", e.g. CustomFile[]. If the web service changes then it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work. Thanks, Raj.
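
    As a small illustration of the second approach (type names are made up), the mapping is usually just a projection, so a change in the web service only touches this one method:

        using System.Collections.Generic;
        using System.Linq;

        // Stand-in for whatever type the generated web service proxy returns.
        public class ServiceFile
        {
            public string FileName { get; set; }
            public long Size { get; set; }
        }

        // Your own class, owned by the front end.
        public class CustomFile
        {
            public string Name { get; set; }
            public long SizeBytes { get; set; }
        }

        public static class FileMapper
        {
            public static List<CustomFile> ToCustomFiles(IEnumerable<ServiceFile> serviceFiles)
            {
                return serviceFiles
                    .Select(f => new CustomFile { Name = f.FileName, SizeBytes = f.Size })
                    .ToList();
            }
        }

    The repeater/ListView then binds to the List<CustomFile>, so the front-end markup never sees the proxy types.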

    Read the article

  • Security for web services only used from a Silverlight application?

    - by Lasse V. Karlsen
    I have googled a bit for how I should handle security in a web service application when the application is basically the data repository for a Silverlight application, but have gotten inconclusive results. The Silverlight application is not supposed to have its own user authentication, since it will be reachable only through a web application that the user have already authenticated to get into. As such, I was thinking I could simply add a parameter to the SL application that is a cookie-type value, with a certain lifetime, linked to the user in the database. The SL application would then have to pass this value alongside other parameters to the web services. Since the web service is hopefully going to be a generic web service endpoint, few methods, adding an extra parameter at this level will not be a problem. But, am I supposed to roll this system on my own? It sounds to me as this isn't exactly new features that nobody has considered before, so what are my options?
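
    One common way to hand that cookie-type value to the Silverlight application is to emit it as an initParams entry on the hosting (already authenticated) page, read it at startup, and then pass it along with every service call. A rough sketch of the Silverlight side (the parameter and class names are made up):

        using System.Windows;

        public partial class App : Application
        {
            public App()
            {
                this.Startup += Application_Startup;
                InitializeComponent();
            }

            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // Read the token the hosting page passed in via initParams.
                string sessionToken;
                e.InitParams.TryGetValue("sessionToken", out sessionToken);

                // Keep it somewhere the generated service proxies can reach.
                SessionContext.Token = sessionToken;

                this.RootVisual = new MainPage();
            }
        }

        public static class SessionContext
        {
            public static string Token { get; set; }
        }

    The hosting page would add something like <param name="initParams" value="sessionToken=..." /> to the Silverlight object tag, and the web service would validate the token and its lifetime on every call, exactly as described above.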

    Read the article

  • Are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture?

    - by snowflake
    I consider a self-described / auto-descriptive service to be a good thing in a SOA architecture, since (almost) everything you need to know to call the service is present in the service contract (such as a WSDL). An example of a non-self-described service, for me, is Facebook Query Language (FQL, http://wiki.developers.facebook.com/index.php/FQL), or any web service that exchanges an XML payload in a single String parameter and then parses the XML and processes it. The latter seem more technically decoupled, since technically you can switch implementations without any technical impact on the caller, handling compatibility between implementations/versions at the business level. On the other hand, having no strong interface (it is diluted into the service and its version) makes the service tightly coupled to the existing implementation (it is harder to swap out the service and to ensure perfect compatibility). This question is related to http://stackoverflow.com/questions/2503071/how-to-implement-loose-coupling-with-a-soa-architecture So, are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture? What are the impacts regarding ESBs? Any pointers will be appreciated.
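
    To make the distinction concrete, here is a sketch of the two contract styles (hypothetical names). The first is self-described because the operations and message shapes end up in the WSDL; the second is technically stable but pushes all of the meaning into an opaque string that caller and implementation must agree on out of band.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Self-described: operations and data contracts appear in the generated WSDL.
        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            Customer GetCustomer(int customerId);
        }

        [DataContract]
        public class Customer
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        // Not self-described: one generic operation; the real contract lives inside the XML string.
        [ServiceContract]
        public interface IQueryService
        {
            [OperationContract]
            string Execute(string xmlRequest);
        }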

    Read the article

  • Can a SQL Server 2008 database support both REST and SOAP web services within two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service. You then deploy or publish this using Visual Studio 2010 in one website. Now, using the same database, you build a REST web service, in a different solution. You deploy this on another website. Can you consume the endpoints and/or .svc file of both the SOAP and REST web services, though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days I'd like to make sure. Also if there's a performance hit to the database if it is running both SOAP and REST at the same time--again, I don't see why it would matter, but I must make sure. Thanks.

    Read the article
