Search Results

Search found 18235 results on 730 pages for 'ad certificate services'.

Page 86/730 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • How to query AD to get name/email from a LAN ID

    - by Kumar
    I have some ASP.NET code (kindly given by someone else) to query AD for a user's name, email, etc.:

        using System.DirectoryServices;
        using System.DirectoryServices.ActiveDirectory;
        using ActiveDs;

        DirectorySearcher search = new DirectorySearcher(new DirectoryEntry(),
            string.Format("(samaccountname={0})", id));
        SearchResult result = search.FindOne();
        if (result == null) return id;
        DirectoryEntry usr = result.GetDirectoryEntry();
        IADsUser oUsr = (IADsUser)usr.NativeObject;
        return string.Format("{0} {1}", usr.Properties["givenname"].Value, usr.Properties["sn"].Value);

    However, this requires impersonation with an ID whose password has to be changed every 2 weeks and then updated in the web.config, which is often forgotten. Is there any non-impersonation code to achieve the same result?
    UPDATE - it's a config tool and it looks up name, email ID, etc. I like the service a/c idea. Q - How is it possible to run (impersonate) just the AD code with a "service" a/c? Any samples/code? How do you impersonate?
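
    One possible direction (a sketch of my own, not from the original post): instead of impersonating the whole request, pass the service account's credentials directly to the DirectoryEntry, so only the AD lookup runs as that account. The LDAP path and credential parameters below are placeholders.

        using System.DirectoryServices;

        public static string GetDisplayName(string id, string svcUser, string svcPassword)
        {
            // Bind with explicit service-account credentials instead of the thread identity.
            using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com", svcUser, svcPassword))
            using (DirectorySearcher search = new DirectorySearcher(root,
                string.Format("(samaccountname={0})", id)))
            {
                SearchResult result = search.FindOne();
                if (result == null) return id;
                using (DirectoryEntry usr = result.GetDirectoryEntry())
                {
                    return string.Format("{0} {1}",
                        usr.Properties["givenname"].Value, usr.Properties["sn"].Value);
                }
            }
        }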

    Read the article

  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that in Fig. A and Fig. B. The difference between the two is that in Fig. A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are made from the "Orchestrating layer" only). In Fig. B the application is also composed of services, but one service calls another service. Does the architecture in Fig. B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture in Fig. A be considered superior to the one in Fig. B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.

    Read the article

  • UDP sockets in ad hoc network (Ubuntu 9.10)

    - by Ekhiotz
    Hi! I am using BSD sockets in Ubuntu 9.10 to send broadcast UDP packets with the following code (abridged; it is only to give an idea):

        sock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
        //sock_fd = socket(AF_INET, SOCK_DGRAM, 0);
        receiver_addr.sin_family = PF_INET;
        //does not send with broadcast in ad hoc
        receiver_addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);
        inet_aton("169.254.255.255", &receiver_addr.sin_addr);
        receiver_addr.sin_port = htons(port);
        int broadcast = 1;
        // this call is what allows broadcast packets to be sent:
        if (setsockopt(sock_fd, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof broadcast) == -1) {
            perror("setsockopt (SO_BROADCAST)");
            exit(1);
        }
        ret = sendto(sock_fd, packet, size, 0, (struct sockaddr*)&receiver_addr, sizeof(receiver_addr));

    The program sends all the data with INADDR_BROADCAST if I am connected to an infrastructure wireless network. However, if my laptop is connected to an ad-hoc network, it is able to receive all the data, but not to send it. I have worked around the problem by using the 169.254.255.255 broadcast address instead, but I would like to know what is going on. Thank you in advance!

    Read the article

  • A local error has occurred while connecting to AD in Windows 2008 server

    - by Sara
    There's an Active Directory server on Windows 2000 Advanced Server, and I have a web server on Windows 2008 Server Enterprise Edition. The following code works fine on Windows 2003 Server, but after I installed Windows 2008 Server it gives me the error below. The web server is not a subdomain of the AD server, but they are in the same IP address range.

        A local error has occurred. (System.DirectoryServices.DirectoryServicesCOMException)

    I want to authenticate via AD from my web server. I tested port 389 and it was open (by telnet); I even added port 389 UDP and TCP to the web server's firewall to be sure it is open, and even turned the firewall off, but nothing changed. I don't know what's wrong with Windows 2008 Server that it cannot run my code; I searched the Internet but found nothing. Any solution would be helpful. Thank you.

        public bool IsAuthenticated(string username, string pwd, string group)
        {
            string domainAndUsername = "LDAP://192.xx.xx.xx:389/DC=test,DC=oc,DC=com";
            string usr = "CN=" + username + ",CN=" + group;
            DirectoryEntry entry = new DirectoryEntry(domainAndUsername, usr, pwd, AuthenticationTypes.Secure);
            try
            {
                DirectorySearcher search = new DirectorySearcher(entry);
                search.Filter = "(SAMAccountName=" + username + ")";
                SearchResult result = search.FindOne();
                if (result == null)
                {
                    return false;
                }
            }
            catch (Exception ex)
            {
                return false;
            }
            return true;
        }
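
    An alternative worth trying (my suggestion, not from the original post): on .NET 3.5 and later, System.DirectoryServices.AccountManagement can validate credentials without hand-building the LDAP bind string. A minimal sketch, assuming the domain is reachable by name ("test.oc.com" is a placeholder):

        using System.DirectoryServices.AccountManagement;

        public bool IsAuthenticated(string username, string pwd)
        {
            using (PrincipalContext context = new PrincipalContext(ContextType.Domain, "test.oc.com"))
            {
                // Returns true only if the username/password pair is valid in the domain.
                return context.ValidateCredentials(username, pwd);
            }
        }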

    Read the article

  • Locating SSL certificate, key and CA on server

    - by jovan
    Disclaimer: you don't need to know Node to answer this question, but it would help. I have a Node server and I need to make it work with HTTPS. As I researched around the internet, I found that I have to do something like this:

        var fs = require('fs');
        var credentials = {
            key: fs.readFileSync('path/to/ssl/private-key'),
            cert: fs.readFileSync('path/to/ssl/cert'),
            ca: fs.readFileSync('path/to/something/called/CA')
        };
        var app = require('https').createServer(credentials, handler);

    I have several problems with this. First off, all the examples I found use completely different approaches. Some link to .pem files for both the certificate and the key. I don't know what .pem files are, but I know my certificate is a .crt and my key is a .key. Some start off at the root folder and some seem to just have these .pem files in the application directory. I don't. Some use the ca entry too and some don't. This CA is supposed to be my domain's CA bundle according to some articles - but none explain where to find this file. In the ssl directory on my server I have one .crt file in the certs directory and one .key file in the keys directory, in addition to an empty csrs directory and an ssl.db file. So, where do I find these 3 files (key, cert, ca) and how do I link to them correctly?

    Read the article

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF services as part of an API for 3rd parties to access data within a product I work on. There is currently a set of services exposed as "classic" .NET web services and I need to emulate their behaviour, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc.) and returns a session token (represented as a GUID), which is then passed in on calls to any other method. (There's also a ReleaseAuthenticationToken method; no guesses needed as to what that does!) What I want to do is implement multiple WCF services, such as ProductData and UserData, and have both of these services share a common implementation of Acquire/Release. From the base project created by VS2008, it would appear I will start with the following, per service:

        public class ServiceName : IServiceName { }
        public interface IServiceName { }

    Therefore my questions would be:
    1. Will WCF tolerate me adding a base class to this, public class ServiceName : ServiceBase, IServiceName, or does the fact that there's an interface involved mean that won't work?
    2. If "no, it won't work" to question 1, could I change IServiceName so it extends another interface, IServiceBase, thus forcing the presence of the Acquire/Release methods, but then having to supply the implementation in each service?
    3. Are 1 and 2 both really bad ideas, and is there actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?
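
    For what it's worth, a minimal sketch of option 1 (names are illustrative, not from the original services). As far as I can tell, WCF dispatches on the contract interface, and C# interface mapping accepts inherited public members, so a base class can supply the token methods:

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        // Shared Acquire/Release implementation (not thread-safe; illustration only).
        public abstract class ServiceBase
        {
            private static readonly Dictionary<Guid, string> Sessions = new Dictionary<Guid, string>();

            public Guid AcquireAuthenticationToken(string username, string password)
            {
                Guid token = Guid.NewGuid();      // validate credentials here in real code
                Sessions[token] = username;
                return token;
            }

            public void ReleaseAuthenticationToken(Guid token)
            {
                Sessions.Remove(token);
            }
        }

        [ServiceContract]
        public interface IProductData
        {
            [OperationContract]
            Guid AcquireAuthenticationToken(string username, string password);

            [OperationContract]
            void ReleaseAuthenticationToken(Guid token);

            [OperationContract]
            string GetProduct(Guid token, int productId);
        }

        // Only GetProduct is new here; the base class satisfies the rest of the contract.
        public class ProductData : ServiceBase, IProductData
        {
            public string GetProduct(Guid token, int productId)
            {
                return "...";
            }
        }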

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'top down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'succeeding with SOA', then the reasons for the project's existence probably need to be questioned.
    There are a number of things we can be sure of:
    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance
    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services
    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured gaggle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognises the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach? With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models are guaranteed to change in the not-so-near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.
    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solutions together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists? Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organisation. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on 'succeeding with SOA' rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organisation. But only if the organisation realises this and can develop such a strategy. This strategy cannot be bought in a box.
    Anyone who has worked with SOA for a while will be used to analysing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) from the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then, at a later date, map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling work.
    How do we do this? The concept of service virtualisation has been around for a while, and is instantly realisable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view and present uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services, with their beautifully modeled interfaces, as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.
    This solves a big problem. The pressure to deliver services quickly is always there in projects; it is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery, with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organisations, and it requires a certain maturity to realise and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.
    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with pre-developed services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked, we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
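    As a rough illustration of the virtual service / adapter split (entirely a sketch of mine; the names and plumbing are illustrative, not taken from the Managed Services Engine):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class CustomerDto
        {
            [DataMember] public string Number { get; set; }
            [DataMember] public string Name { get; set; }
        }

        // The business analyst's modeled view: a stable, beautifully modeled contract.
        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            CustomerDto GetCustomer(string customerNumber);
        }

        // Virtual 'SOA service': transforms and routes to the technical implementation,
        // insulating callers from the legacy system's shape.
        public class CustomerVirtualService : ICustomerService
        {
            private readonly LegacyCrmAdapter legacy = new LegacyCrmAdapter();

            public CustomerDto GetCustomer(string customerNumber)
            {
                LegacyRecord record = legacy.FetchRecord(customerNumber);
                return new CustomerDto { Number = record.Id, Name = record.FullName };
            }
        }

        // Simple integration 'adapter' standing in for the real implementation.
        public class LegacyCrmAdapter
        {
            public LegacyRecord FetchRecord(string id)
            {
                return new LegacyRecord { Id = id, FullName = "..." };
            }
        }

        public class LegacyRecord
        {
            public string Id { get; set; }
            public string FullName { get; set; }
        }

    The point of the shape is that remodeling ICustomerService touches only the virtual layer, while redeveloping the legacy system touches only the adapter.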
    The choice is between something that will initially be of low quality but will work, or something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • Passing Certificate to Svcutil to generate proxy for OSB Service

    - by webwires
    We want to implement two-way SSL security from WCF to OSB services. We have successfully deployed the certificates, so that when you browse to the service with IE you get the appropriate prompt for a certificate and then it takes you immediately to the WSDL. But when you attempt to generate a proxy using svcutil, as defined in steps 8 and 9 of this MSDN article (http://msdn.microsoft.com/en-us/library/cc949005.aspx), I get the error:

        A reply message was received for operation 'Get' with action 'http://schemas.xmlsoap.org/ws/2004/09/transfer/Get'. However, your client code requires action 'http://schemas.xmlsoap.org/ws/2004/09/transfer/GetResponse'.

    The OSB services are set to use SOAP 1.2, and the svcutil.exe.config we use is identical to the article's except for the findValue and x509FindType. Instead, we used FindByThumbprint, pointing to the "My" store name and the "CurrentUser" store location. The cert is there and is the same cert we select at the IE prompt.
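
    For reference, the same certificate selection expressed in code rather than in svcutil.exe.config (a sketch; the contract, address and thumbprint are placeholders - the config attributes findValue, storeLocation, storeName and x509FindType map onto the SetCertificate parameters):

        using System.Security.Cryptography.X509Certificates;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyOsbService
        {
            [OperationContract]
            string Ping();
        }

        public static class OsbClientFactory
        {
            public static IMyOsbService Create()
            {
                WSHttpBinding binding = new WSHttpBinding(SecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;

                ChannelFactory<IMyOsbService> factory = new ChannelFactory<IMyOsbService>(
                    binding, new EndpointAddress("https://osb-host/service"));
                factory.Credentials.ClientCertificate.SetCertificate(
                    StoreLocation.CurrentUser, StoreName.My,
                    X509FindType.FindByThumbprint, "0123456789abcdef...");
                return factory.CreateChannel();
            }
        }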

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema; the multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases, but from the client's point of view there is only one database: when a client performs a query, it should query all the databases and return the combined results. I also need to provide the flexibility for the client to define its own queries, so I am looking into WCF Data Services, which provides very nice functionality for client-specified queries. So far, it seems that a DataService can only query a single database; I have found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
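
    One avenue to explore (a sketch only, with hypothetical entity and context names): WCF Data Services' reflection provider exposes any class with public IQueryable<T> properties, so a wrapper context could concatenate the per-database results:

        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("Id")]
        public class FileRecord
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Wrapper context combining several identically-shaped databases.
        public class CombinedContext
        {
            public IQueryable<FileRecord> Files
            {
                get { return LoadFrom("Db1").Concat(LoadFrom("Db2")).AsQueryable(); }
            }

            // Placeholder for a per-database query (e.g. one ObjectContext each).
            private static IEnumerable<FileRecord> LoadFrom(string connectionName)
            {
                yield break;
            }
        }

        public class CombinedService : DataService<CombinedContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Files", EntitySetRights.AllRead);
            }
        }

    The catch is that concatenating in memory stops the client's filter from being pushed down to SQL, so each database gets scanned; whether that trade-off is acceptable depends on how selective the queries are.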

    Read the article

  • What to keep in mind when creating your own custom web services API?

    - by John Conde
    I have created a website which allows users to sign up for, and use, an online service. To help promote the website we will have resellers who will offer their own branded services through us. The initial plan is to allow resellers to place registration, login, and lost-password forms on their own websites and use an API created by us to handle these requests. I have begun outlining how I expect the API to work (and have started documenting it as well), and I want to make sure I get it right, or as close to right as I can, from the beginning, as I know that once you have declared a public API you want to avoid changing it at all costs. So far I have decided:
    - To have the user pass their account credentials with each request
    - To require SSL for all requests
    What else should I be keeping in mind?
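
    One item commonly added to such a list (my suggestion, not from the post): instead of sending the raw password on every call, issue each reseller an API key and shared secret and have them sign requests with an HMAC. A sketch:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class ApiSigner
        {
            // Signs a canonical request string with the reseller's shared secret.
            public static string Sign(string secret, string method, string path, string timestamp)
            {
                string payload = method + "\n" + path + "\n" + timestamp;
                using (HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
                {
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
                }
            }
        }

    The server recomputes the signature with its own copy of the secret and rejects stale timestamps, which also blunts replay attacks.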

    Read the article

  • Best practice when working with web services that return objects?

    - by Raj
    Hello, I'm currently working with web services that return objects such as a list of files, e.g. a File array. I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example a Repeater/ListView), or to first parse it into my own list of "file" classes, e.g. CustomFile[]. If the web service changes then it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work. Thanks, Raj.
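
    For what it's worth, the translation layer is usually only a few lines per type. A sketch, with a stand-in for the generated proxy type (hypothetical names throughout):

        using System.Linq;

        // Stand-in for the proxy type generated from the web service reference.
        namespace ServiceProxy
        {
            public class File
            {
                public string Name { get; set; }
                public long Size { get; set; }
            }
        }

        // Your own front-end type, insulated from the service's proxy type.
        public class CustomFile
        {
            public string Name { get; set; }
            public long Size { get; set; }
        }

        public static class FileMapper
        {
            // The single place to touch when the web service changes.
            public static CustomFile[] ToCustomFiles(ServiceProxy.File[] serviceFiles)
            {
                return serviceFiles
                    .Select(f => new CustomFile { Name = f.Name, Size = f.Size })
                    .ToArray();
            }
        }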

    Read the article

  • Security for web services only used from a Silverlight application?

    - by Lasse V. Karlsen
    I have googled a bit for how I should handle security in a web service application when the application is basically the data repository for a Silverlight application, but have gotten inconclusive results. The Silverlight application is not supposed to have its own user authentication, since it will be reachable only through a web application that the user has already authenticated against. As such, I was thinking I could simply pass the SL application a parameter that is a cookie-type value, with a certain lifetime, linked to the user in the database. The SL application would then pass this value, alongside the other parameters, to the web services. Since the web service is hopefully going to be a generic web service endpoint with few methods, adding an extra parameter at this level will not be a problem. But am I supposed to roll this system on my own? It sounds to me as if this isn't exactly a new feature that nobody has considered before, so what are my options?
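
    A minimal sketch of the server side of that scheme (hypothetical names; the token store and lifetime policy are placeholders):

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IDataService
        {
            [OperationContract]
            string[] GetFiles(Guid sessionToken, string folder);
        }

        public class DataService : IDataService
        {
            public string[] GetFiles(Guid sessionToken, string folder)
            {
                // Reject calls whose token is unknown or past its lifetime.
                if (!SessionStore.IsValid(sessionToken))
                    throw new FaultException("Invalid or expired session token.");
                return new string[] { "..." };
            }
        }

        // Placeholder for a database-backed token table with expiry timestamps.
        public static class SessionStore
        {
            public static bool IsValid(Guid token)
            {
                return false; // look up the token and check its lifetime here
            }
        }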

    Read the article

  • Are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture?

    - by snowflake
    I consider a self-described / auto-descriptive service a good thing in a SOA architecture, since (almost) everything you need to know to call the service is present in the service contract (such as a WSDL). An example of a non-self-described service, for me, is the Facebook Query Language (FQL, http://wiki.developers.facebook.com/index.php/FQL), or any web service that exchanges an XML flow in a single string parameter, which is then parsed and processed by the implementation. The latter seem more technically decoupled, since technically you can switch implementations without technical impact on the caller, handling compatibility between implementations/versions at a business level. On the other side, having no strong interface (it is diluted into the service and its version) makes the service tightly coupled to the existing implementation (more difficulty interchanging the service and ensuring perfect compatibility). This question is related to http://stackoverflow.com/questions/2503071/how-to-implement-loose-coupling-with-a-soa-architecture. So, are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture? What are the impacts regarding ESBs? Any pointers will be appreciated.

    Read the article

  • Can a SQL Server 2008 database support both REST and SOAP web services through two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service and deploy or publish it using Visual Studio 2010 on one website. Now, using the same database, you build a REST web service in a different solution and deploy it on another website. Can you consume the endpoints and/or .svc files of both the SOAP and REST web services, even though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days on it, I'd like to make sure. Also, is there a performance hit to the database if it is serving both SOAP and REST at the same time? Again, I don't see why it would matter, but I must make sure. Thanks.

    Read the article

  • I spy a Live Framework portal

    - by jamiet
    Those that have followed my blogs for a while may know that I have a slightly banal interest in Windows Live and, more specifically, the Live Services developer platform; if that doesn't sound interesting to you then stop reading now. My interest mainly stems from the Live Mesh technology that was announced a couple of years ago and the data synchronisation platform API that underpins it; that platform is called the Live Framework, or LiveFX for short. At the Professional Developer's Conference (PDC) 2008 Microsoft made LiveFX available to the public as a tech preview; I spent some time learning to use it and built a few test apps on it too. In August 2009 an announcement came that that tech preview was getting shut down:

        "At the Professional Developer Conference 2008, we gave the developer community access to the technical preview of the Live Framework. The Live Framework is core to our vision of providing you with a consistent programming interface. Now we are working to integrate existing services, controls and the Live Framework into the next release of Windows Live. Your feedback continues to help us build the best possible offerings for Windows Live users, for you and for your customers."

    Since then news on LiveFX has dried up, save for a throwaway session at PDC09, and I was hoping that news was going to appear at this week's MIX conference, but nothing was forthcoming. Instead, today I stumbled upon an unannounced portal for future LiveFX applications on Microsoft's Azure portal at http://live.azure.com. Check it out: I consider this to be very good news. This Azure portal was built after the LiveFX tech preview was decommissioned, so seeing Live Services sit so prominently alongside Microsoft's other cloud efforts like Windows Azure and SQL Azure vindicates my early investment in the platform and gives me hope that we're going to see something released very soon. I believe the potential uses for this platform are extremely compelling and I'm looking forward to trying some out in the near future. I am also expecting LiveFX to have a heavy dependency on the OData protocol that I talked about yesterday in my post "OData.org updated - gives clues about future SQL Azure enhancements", so you can tell where my interest in that stems from. In case you're wondering, the projects listed above (Basic List Sample, JT-proj etc.) are projects that I built on the old tech preview platform, so clearly that stuff has not gone for good, which is also good news; not just because it means I'll have access to the code I wrote before, but also because I assume it means LiveFX won't have changed much since its tech preview incarnation. I know there are other LiveFX buffs out there and hopefully this news reaches some of them. If you are one of them then please leave a comment below and let me know your thoughts! @Jamiet

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally, if not more, important for the BI community. Let's start with the big news in the keynote (other details on the SQL Server Blog):
    - Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.
    - Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near-real-time reporting needs!
    - Polybase: in 2013 SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. With Polybase, a single T-SQL query will run across relational data and Hadoop data - a single query language for both. This sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that all we BI pros have, right?
    - SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    - Power View works with Multidimensional cubes: the long-awaited ability to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run against a Multidimensional model, enabling the use of any future tool that generates DAX queries on top of a Multidimensional model. There is still no info about availability, but this is *not* included in SQL Server 2012 SP1.
    So what about Mobile BI? Well, even though it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area: on iOS, Android and Microsoft mobile platforms, the commitment is to get data exploration and visualization capabilities working by June 2013. This should affect at least Power View and SharePoint/Excel Services. This is the type of UI experience we have all been waiting for in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, though I hope we don't have to wait another six months before seeing demos of native BI applications on mobile platforms!
    Viewing Reporting Services reports on the iPad is supported starting with SQL Server 2012 SP1, which was released today - another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Week in Geek: USDA Chooses Microsoft for Cloud Services Edition

    - by Asian Angel
    This week we learned how to create geeky LED holiday lights with old bottles, dig deeper into Windows Defrag via the command prompt, use Google Chrome's drag/drop feature to upload files more easily, find great gift recommendations by looking through the How-To Geek holiday gift guide, and have fun adding Merry Christmas fonts to our computers. Photo by ntr23.
    Random Geek Links
    It has been a busy week, so we have extra news link goodness with information that is good for you to know.
    - USDA making the move to Microsoft: The U.S. Department of Agriculture has announced that it has chosen Microsoft to host things like e-mail, instant messaging, and collaboration through the software giant's Business Productivity Online Suite.
    - Google says it was cut off from USDA project bid: Google is claiming that it was not given a chance to bid on a cloud-computing project for the U.S. Department of Agriculture, for which the contract was awarded to rival Microsoft.
    - Apache is being forced into a Java fork: When Oracle rolled over Apache and Google's objections to its Java plans in December, the scene was set for Apache to leave and, eventually, force a Java code fork.
    - Tumblr explains daylong outage: After experiencing an outage that started on Sunday afternoon and stretched through most of the day yesterday, Tumblr has explained what happened.
    - Google demos Chrome OS, launches pilot program: During a press briefing this week in San Francisco, Google launched the Chrome application store and demonstrated Chrome OS, its browser-centric netbook operating system.
    - Don't expect Spotify in U.S. this holiday season: As of last week, Spotify had yet to sign a single licensing deal with a major label, after spending more than a year negotiating, multiple music sources told CNET.
    - December 2010 Patch Tuesday will come with most bulletins ever: According to the Microsoft Security Response Center, Microsoft will issue 17 security bulletins addressing 40 vulnerabilities on Tuesday, December 14. It will also host a webcast to address customer questions the following day.
    - Hacker plants back door in Symbian firmware: Indian hacker Atul Alex has had a look at the firmware for Symbian S60 smartphones and come up with a back door for it.
    - PC quarantines raise tough complexities: The concept of quarantining PCs to prevent widespread infection is "interesting, but difficult to implement, with far too many problems", said security experts.
    - Symantec: DDoS attacks hard to defend: It has surfaced that the distributed denial of service (DDoS) attacks on the Visa and MasterCard web sites on Wednesday were carried out by a toolkit known as low orbit ion cannon (LOIC).
    - Web Sockets and the risks of unfinished standards: Enthusiasm for a promising new standard called Web Sockets has quickly cooled in some quarters as a potential security problem led some browser makers to hastily postpone support.
    - Internet Explorer 9 to get tracking protection: Microsoft is making changes to Internet Explorer 9's security features that will better enable users to keep sites from tracking their activity across browsing sessions.
    - NASA sold PCs with sensitive data: NASA failed to remove sensitive data from computers that it sold, according to an audit report released this week.
    - Cybercrooks create fake Amazon receipts: The bad guys have created yet another online scam, this one involving fake Amazon receipts.
    - World of Warcraft character move fees waived: Until December 22, Blizzard will allow free realm transfers from 25 highly populated servers to alleviate log-in queues or performance issues. (The free transfers are one-way and one-time only.)
    - SpaceX Dragon reaches orbit atop a Falcon with a fiery tail: The Space Exploration Technologies corporation has become the first nongovernmental entity to put a vehicle into low Earth orbit.
    Geek Video of the Week
    If birds have wings, then why are the Angry Birds using slingshots? Photo by Dorkly Bits. Wait... Birds have Wings, Why are the Angry Ones Using Slingshots?
    Sysadmin Geek Tips
    How To Setup Email Alerts on Linux Using Gmail or SMTP: Linux machines may require administrative intervention in countless ways, but without manually logging into them how would you know about it? Here's how to set up emails to get notified when your machines want some tender love and attention.
    Random TinyHacker Links
    - Red Panda Webcam: Support Firefox and the Knoxville Zoo's Red Panda program.
    - Christmas Icons (Icons we like): Superb set of holiday icons by lgp85 at deviantArt. Download the .zip and use as .png, or convert to .ico at Convertico.com or with the tiny app Imagicon.
    Super User Questions
    Enjoy reading the great answers to this week's popular questions from Super User:
    - Useful USB boot disks?
    - DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?
    - What are other ways to backup my files if I do not have an external drive?
    - Anti virus: what is the difference between these all?
    - How can I block all Facebook elements/content?
    How-To Geek Weekly Article Recap
    Have you had a busy week between work and preparing for the holidays? Get caught up on your HTG reading with our hottest articles of the week:
    - 20 Windows Keyboard Shortcuts You Might Not Know
    - The 50 Best Registry Hacks that Make Windows Better
    - LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology
    - HTG Explains: Which Linux File System Should You Choose?
    - How to Use and Customize Google Chrome Web Apps
    One Year Ago on How-To Geek
    This week's batch of retro geeky goodness is all about customizing Windows 7:
    - ClassicShell Adds Classic Start Menu and Explorer Features to Windows 7
    - Get an Aero-Styled Classic Start Menu in Windows 7
    - Customize the Windows 7 Logon Screen
    - Get the Classic Style Network Activity Indicator Back in Windows 7
    - How To Enable Check Boxes for Items In Windows 7
    The Geek Note
    We would like you to join us in welcoming Jason Fitzpatrick to the writing staff here at How-To Geek. He started with us this past week, so take some time to read through his articles about the Wii, Kindle, & PlayStation 2 peripherals and leave a friendly comment to say "Hi"! Got a great tip to share? Make sure to send it in to us at [email protected]. Photo by real00.

    Read the article

  • Cannot bind OSX to AD

    - by erotsppa
    I'm trying to get a Mac mini running Snow Leopard Server to join a Windows domain here. The domain server is running Windows Server 2008. When I go to "Accounts" in my System Preferences and click on "Join", I get this error:

        "Unable to add server. Node name wasn't found. (2000)"

    In my console messages I find this:

        10-04-06 11:42:25 AM System Preferences1452 -[ODCAddServerSheetController handleOtherActionError: gotError: Error Domain=com.apple.OpenDirectory Code=2000 UserInfo=0x2004f2f80 "Custom call 82 to Active Directory failed.", Node name wasn't found.

    I specified an FQDN for the domain server, so I am totally confused as to why it would list "domain = com.apple..." in that error. I've also tried firing up Directory Utility and joining the domain via the Active Directory option there. Again I filled in the FQDN and the proper administrator/password account info. Now I get a different error:

        "Invalid Domain. An invalid Domain and Forest combination was specified. You should enter a fully qualified DNS name for the domain and forest (e.g., ads.company.com)."

    If anyone has any pointers or suggestions, they would be appreciated.

    Read the article

  • ssl_error_handshake_failure_alert with Commercial CA-based client certificate

    - by Bryan
    Attempting to implement client authentication with an SSL cert, following http://www.modssl.org/docs/2.8/ssl_howto.html#auth-selective. I receive the following errors. Apache: "Re-negotiation handshake failed: Not accepted by client!?" Firefox: "ssl_error_handshake_failure_alert". I assume it is a configuration error, but I have not been able to locate it. Additional info: the commercial CA-issued server certificate secures the server without problems under Apache 2.2 & Passenger; only the client-authentication-related directives do not work.
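
    For context, the 'selective' setup from that howto boils down to something like the following (a sketch; the paths and the protected location are placeholders):

        SSLVerifyClient none
        SSLCACertificateFile conf/ssl.crt/ca.crt

        <Location /secure/area>
            SSLVerifyClient require
            SSLVerifyDepth 1
        </Location>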

    Read the article

  • Uploading documents to WSS (Windows Sharepoint Services) using SSIS

    - by Randy Aldrich Paulo
    Recently I was tasked with creating an SSIS application that queries a database, splits the results on certain criteria, creates a CSV file for every result, and uploads the files to a SharePoint document library site. I've searched the web and compiled the steps I've taken to build the solution.
    Summary:
    A) Create a proxy class of WSS Copy.asmx.
    B) Create a wrapper class for the proxy class, adding a mechanism to check whether a file exists and a delete method.
    C) Create an SSIS package that calls the wrapper class to transfer the files.

    A) Creating the proxy class
    1) Go to the Visual Studio command prompt and type: wsdl http://[sharepoint site]/_vti_bin/Copy.asmx - this generates the proxy class (Copy.cs) that will be added to the solution.
    2) Add Copy.cs to the solution and create another constructor for Copy() that accepts the additional parameters url, userName, password and domain:

        public Copy(string url, string userName, string password, string domain)
        {
            this.Url = url;
            this.Credentials = new System.Net.NetworkCredential(userName, password, domain);
        }

    3) Add a namespace.

    B) Wrapper class
    Create a new C# library that references the proxy class.

    C) Create the SSIS package
    The SSIS solution is composed of:
    1) An Execute SQL Task, which returns single-column rows containing the criteria.
    2) A Foreach Loop Container, which loops over the query results (from the SQL Task) and creates a CSV file per result in a certain folder.
    3) A Script Task, which calls the wrapper class to upload the CSV files in that folder to the target WSS document library.
    Note: I've created another overload of CopyFiles that accepts a DirectoryInfo instead of a file location and loops through the contents of the folder.
    (Designer view and variable view screenshots.)
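
    A sketch of what the wrapper's upload call might look like (my assumption of the typical pattern; Copy, FieldInformation and CopyResult are the types generated into the proxy class in step A, and passing the destination URL as the source URL is the usual trick for uploading a local file):

        using System.IO;

        public void UploadFile(Copy copyService, string localPath, string destinationUrl)
        {
            byte[] contents = File.ReadAllBytes(localPath);
            FieldInformation[] fields = new FieldInformation[0];
            CopyResult[] results;

            // Push the file bytes straight into the target document library.
            copyService.CopyIntoItems(destinationUrl, new string[] { destinationUrl },
                                      fields, contents, out results);
        }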

    Read the article

  • NPS EAP authentication failing after Windows Update

    - by sqlreader
    I have a Windows 2008 Std server running NPS. After applying the latest round of updates (including Update for Root Certificates for April 2012, KB931125; see http://support.microsoft.com/kb/933430/), EAP authentication is failing because messages are reported as malformed. Sample error (Security log, Event ID 6273), truncated for brevity:

        Authentication Details:
            Proxy Policy Name: Use Windows authentication for all users
            Network Policy Name: Wireless Access
            Authentication Provider: Windows
            Authentication Server: nps-host.corp.contoso.com
            Authentication Type: PEAP
            EAP Type: -
            Account Session Identifier: -
            Reason Code: 266
            Reason: The message received was unexpected or badly formatted.

    The NPS policy (Wireless Access) is configured accordingly (under Constraints/Authentication methods):
    EAP types:
    - Microsoft: Protected EAP (PEAP), with a valid certificate from ADCS
    - Microsoft: Secured password (EAP-MSCHAP v2)
    Less secure authentication methods:
    - Microsoft Encrypted Authentication version 2 (MS-CHAP-v2); user can change password after it has expired
    - Microsoft Encrypted Authentication (MS-CHAP); user can change password after it has expired

    We've tested a different RADIUS server without the aforementioned patch, removed EAP as an authentication type, and experienced success. Has anyone else experienced this issue?

    Read the article

  • Introducing SQL Server 2008 and 2008 R2 Integration Services

    The latest release of SSIS strengthens its position as one of the primary foundations of Business Intelligence, delivering a powerful framework for solutions that combine data from disparate sources, facilitating its analysis and reporting. Join Marcin Policht as he reviews its general characteristics.

    Read the article

  • export shared services from MOSS

    - by vittocia
    Hello, using the stsadm command I have been able to export a MOSS website and restore it on a different server, which works fine. I tried the same for the shared services; it gave no errors, but not all the import connections are present when I check around. Is there a better way to export and restore shared services, or a way to sync the import connections and user list?

    Read the article

  • Can Windows log CryptoAPI CRL timeouts?

    - by makerofthings7
    We have several .NET applications that occasionally "act slow" with no CPU or disk access. I suspect that they are hanging on certificate validation during authentication, since the timeout is almost 20 seconds. Per this MSFT article:

        Most applications do not specify to CryptoAPI to use a cumulative time-out. If the cumulative time-out option is not enabled, CryptoAPI uses the CryptoAPI default setting, which is a time-out of 15 seconds per URL. If the cumulative time-out option is specified by the application, then CryptoAPI will use a default setting of 20 seconds as the cumulative time-out. The first URL receives a maximum timeout of 10 seconds. Each subsequent URL timeout is half of the remaining balance in the cumulative timeout value.

    Since this is a service, how can I detect and log CryptoAPI hangs, both for applications I have source code to, and also 3rd party
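
    For the applications you do have source for, a minimal sketch of one way to surface slow revocation checks (my suggestion; the 5-second threshold is arbitrary):

        using System;
        using System.Diagnostics;
        using System.Security.Cryptography.X509Certificates;

        public static void CheckRevocationSpeed(X509Certificate2 cert)
        {
            Stopwatch sw = Stopwatch.StartNew();
            X509Chain chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online; // force the CRL/OCSP fetch
            bool valid = chain.Build(cert);
            sw.Stop();

            if (sw.Elapsed > TimeSpan.FromSeconds(5))
            {
                Trace.TraceWarning("Revocation check took {0:n1}s (chain valid: {1})",
                                   sw.Elapsed.TotalSeconds, valid);
            }
        }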

    Read the article
