Search Results

Search found 36032 results on 1442 pages for 'oracle enterprise service automation'.


  • Cumulative Update #1 for SQL Server 2005 SP4

    - by AaronBertrand
    Well, much quicker than I would have suspected, the SQL Server Release Services team has incorporated all of the fixes in 2005 SP3's CU #12 into the first CU for SP4. Thanks to Chris Wood for the heads up. You can get the new Cumulative Update here: KB #2464079 : Cumulative update package 1 for SQL Server 2005 Service Pack 4 The nice round number of build 5000 didn't last long either; this CU will update you from 9.00.5000 to 9.00.5254....(read more)

    Read the article

  • Security Updates Available for SQL Server 2008, 2008 R2, 2012, 2014

    - by AaronBertrand
    If you are running 2008 SP3, 2008 R2 SP2, 2012 SP1 (SP2 is not affected, RTM is no longer supported), or 2014, you'll want to check out Security Bulletin MS14-044 for details on a denial of service / privilege escalation issue that has been patched: http://technet.microsoft.com/en-us/library/security/MS14-044 For SQL Server 2012 and SQL Server 2014, I've blogged about recent builds and recommendations here: http://blogs.sqlsentry.com/team-posts/latest-builds-sql-server-2012/ http://blogs.sqlsentry.com/team-posts/latest-builds-sql-server-2014...(read more)

    Read the article

  • Cumulative Update packages for SQL Server 2008 are available now: CU7 for SQL2008 SP2 and CU2 for SQL2008 SP3

    - by ssqa.net
    Another instalment of the Cumulative Update packages for SQL Server 2008 SP3 is available now: CU2, with build number 10.00.5768.00. As usual, this CU2 for SQL 2008 SP3 contains hotfixes for issues that were fixed after the release of SQL Server 2008 Service Pack 3 (SP3). KB article 2633143 lists the following article numbers with more information on the fixes: VSTS bug number | KB article number | Description: 794387 | 2522893 (http://support.microsoft.com/kb/2522893/) | FIX: A backup operation...(read more)

    Read the article

  • Stuck with Documentum Still? Do MORE with Oracle WebCenter!

    - by Michael Snow
    WEBCAST TODAY!! 03/22/12 Do you need to lower costs? Raise Productivity? Foster Innovation? Improve Online Engagement? But you’re still stuck with Documentum? Step away from the ledge – there is hope – let us help you.

    Top 4 Content Imperatives
    · Lower Costs - Reduce labor, maintenance fees, storage and electrical consumption
    · Raise Productivity - Automation and integration, communication, findability
    · Foster Innovation - Enable collaboration, expertise location
    · Improve Online Engagement – enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce money we are wasting on things like storing content that is redundant or no longer useful. We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What’s more, their wages total 18 percent of the United States GDP. And so we can’t afford information systems that don’t let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and to make content more findable so people can make the right decision and move the process forward. And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times – to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization’s accomplishments. Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require the business users be enabled for dynamic marketing content creation, updates and targeted content creation and management. Unfortunately – if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:
    · Increase sales and loyalty by optimizing online engagement
    · Create, manage and moderate contextually relevant, targeted and interactive online experiences
    · Optimize customer engagement across web, mobile and social channels
    · Manage large scale multichannel global online presence with integration to enterprise applications
    · Enable business users to control their content and make their own updates
    · Publish content from native files – enable navigation of project documents, procedures, policy information
    · Enable content display and updates from existing web applications – one click to drag and drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management. So… if you are still stuck with Documentum and don’t know what to do – contact us – not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program we are also offering you a way out – trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How’s that for a nice bonus? It’s time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today – join us for a webcast – your eyes will be opened to all that’s possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with Python 2.6.6 on Ubuntu Maverick, with Oracle 11gR2 Enterprise Edition. I am able to connect to my Oracle db just fine, but once I do that, the subprocess module does not work anymore. Here is an IPython session that reproduces the problem:

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()

        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468         retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472

        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                             p2cread, p2cwrite,
            622                             c2pread, c2pwrite,
        --> 623                             errread, errwrite)
            624
            625         if mswindows:

        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135             if data != "":
        -> 1136                 _eintr_retry_call(os.waitpid, self.pid, 0)
           1137                 child_exception = pickle.loads(data)
           1138                 for fd in (p2cwrite, c2pread, errread):

        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:

        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to Oracle, but breaks after that, even if I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but it didn't help me. Any help appreciated. Thanks.

    Read the article

  • Maintaining shared service in ASP.NET MVC Application

    - by kazimanzurrashid
    Depending on the application sometimes we have to maintain some shared service throughout our application. Let’s say you are developing a multi-blog supported blog engine where both the controller and view must know the currently visiting blog, it’s setting , user information and url generation service. In this post, I will show you how you can handle this kind of case in most convenient way. First, let see the most basic way, we can create our PostController in the following way: public class PostController : Controller { public PostController(dependencies...) { } public ActionResult Index(string blogName, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublished(blog.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCount(blog.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new IndexViewModel(urlResolver, user, blog, posts, count, page)); } public ActionResult Archive(string blogName, int? page, ArchiveDate archiveDate) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindArchived(blog.Id, archiveDate, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetArchivedCount(blog.Id, archiveDate); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new ArchiveViewModel(urlResolver, user, blog, posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } TagInfo tag = tagService.FindBySlug(blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(blog.Id, tag.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new TagViewModel(urlResolver, user, blog, posts, count, page, tag)); } } As you can see the above code heavily depends upon the current blog and the blog retrieval code is duplicated in all of the action methods, once the blog is retrieved the same blog is passed in the view model. Other than the blog the view also needs the current user and url resolver to render it properly. One way to remove the duplicate blog retrieval code is to create a custom model binder which converts the blog from a blog name and use the blog a parameter in the action methods instead of the string blog name, but it only helps the first half in the above scenario, the action methods still have to pass the blog, user and url resolver etc in the view model. 
Now lets try to improve the the above code, first lets create a new class which would contain the shared services, lets name it as BlogContext: public class BlogContext { public BlogInfo Blog { get; set; } public UserInfo User { get; set; } public IUrlResolver UrlResolver { get; set; } } Next, we will create an interface, IContextAwareService: public interface IContextAwareService { BlogContext Context { get; set; } } The idea is, whoever needs these shared services needs to implement this interface, in our case both the controller and the view model, now we will create an action filter which will be responsible for populating the context: public class PopulateBlogContextAttribute : FilterAttribute, IActionFilter { private static string blogNameRouteParameter = "blogName"; private readonly IBlogService blogService; private readonly IUserService userService; private readonly BlogContext context; public PopulateBlogContextAttribute(IBlogService blogService, IUserService userService, IUrlResolver urlResolver) { Invariant.IsNotNull(blogService, "blogService"); Invariant.IsNotNull(userService, "userService"); Invariant.IsNotNull(urlResolver, "urlResolver"); this.blogService = blogService; this.userService = userService; context = new BlogContext { UrlResolver = urlResolver }; } public static string BlogNameRouteParameter { [DebuggerStepThrough] get { return blogNameRouteParameter; } [DebuggerStepThrough] set { blogNameRouteParameter = value; } } public void OnActionExecuting(ActionExecutingContext filterContext) { string blogName = (string) filterContext.Controller.ValueProvider.GetValue(BlogNameRouteParameter).ConvertTo(typeof(string), Culture.Current); if (!string.IsNullOrWhiteSpace(blogName)) { context.Blog = blogService.FindByName(blogName); } if (context.Blog == null) { filterContext.Result = new NotFoundResult(); return; } if (filterContext.HttpContext.User.Identity.IsAuthenticated) { context.User = userService.FindByName(filterContext.HttpContext.User.Identity.Name); } IContextAwareService controller = filterContext.Controller as IContextAwareService; if (controller != null) { controller.Context = context; } } public void OnActionExecuted(ActionExecutedContext filterContext) { Invariant.IsNotNull(filterContext, "filterContext"); if ((filterContext.Exception == null) || filterContext.ExceptionHandled) { IContextAwareService model = filterContext.Controller.ViewData.Model as IContextAwareService; if (model != null) { model.Context = context; } } } } As you can see we are populating the context in the OnActionExecuting, which executes just before the controllers action methods executes, so by the time our action methods executes the context is already populated, next we are are assigning the same context in the view model in OnActionExecuted method which executes just after we set the  model and return the view in our action methods. Now, lets change the view models so that it implements this interface: public class IndexViewModel : IContextAwareService { // More Codes } public class ArchiveViewModel : IContextAwareService { // More Codes } public class TagViewModel : IContextAwareService { // More Codes } and the controller: public class PostController : Controller, IContextAwareService { public PostController(dependencies...) { } public BlogContext Context { get; set; } public ActionResult Index(int? 
page) { IEnumerable<PostInfo> posts = postService.FindPublished(Context.Blog.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCount(Context.Blog.Id); return View(new IndexViewModel(posts, count, page)); } public ActionResult Archive(int? page, ArchiveDate archiveDate) { IEnumerable<PostInfo> posts = postService.FindArchived(Context.Blog.Id, archiveDate, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetArchivedCount(Context.Blog.Id, archiveDate); return View(new ArchiveViewModel(posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { TagInfo tag = tagService.FindBySlug(Context.Blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(Context.Blog.Id, tag.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); return View(new TagViewModel(posts, count, page, tag)); } } Now, the last thing where we have to glue everything, I will be using the AspNetMvcExtensibility to register the action filter (as there is no better way to inject the dependencies in action filters). public class RegisterFilters : RegisterFiltersBase { private static readonly Type controllerType = typeof(Controller); private static readonly Type contextAwareType = typeof(IContextAwareService); protected override void Register(IFilterRegistry registry) { TypeCatalog controllers = new TypeCatalogBuilder() .Add(GetType().Assembly) .Include(type => controllerType.IsAssignableFrom(type) && contextAwareType.IsAssignableFrom(type)); registry.Register<PopulateBlogContextAttribute>(controllers); } } Thoughts and Comments?
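    As an aside (not part of the original post's code), the custom model binder alternative mentioned near the top of the post (converting the blogName route value into a BlogInfo before the action runs) could look roughly like the sketch below. It reuses the article's IBlogService and BlogInfo names; the binder class itself and its registration are hypothetical, and as noted it only removes the duplicated lookup, not the view-model plumbing:

        using System.Web.Mvc;

        public class BlogInfoModelBinder : IModelBinder
        {
            private readonly IBlogService blogService;

            public BlogInfoModelBinder(IBlogService blogService)
            {
                this.blogService = blogService;
            }

            public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                // Pull the blog name from route data, query string or form values.
                ValueProviderResult value = bindingContext.ValueProvider.GetValue("blogName");
                string blogName = (value == null) ? null : value.AttemptedValue;

                return string.IsNullOrWhiteSpace(blogName) ? null : blogService.FindByName(blogName);
            }
        }

        // Registration, e.g. in Application_Start (assuming the container can supply IBlogService here):
        // ModelBinders.Binders.Add(typeof(BlogInfo), new BlogInfoModelBinder(blogService));
        //
        // An action can then take the blog directly:
        // public ActionResult Index(BlogInfo blog, int? page) { ... }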

    Read the article

  • Installing a DHCP Service On Win2k8 ( Windows Server 2008 )

    - by Akshay Deep Lamba
    Introduction

    Dynamic Host Configuration Protocol (DHCP) is a core infrastructure service on any network that provides IP addressing and DNS server information to PC clients and any other device. DHCP is used so that you do not have to statically assign IP addresses to every device on your network and manage the issues that static IP addressing can create. More and more, DHCP is being expanded to fit into new network services like the Windows Health Service and Network Access Protection (NAP). However, before you can use it for more advanced services, you need to first install it and configure the basics. Let’s learn how to do that.

    Installing Windows Server 2008 DHCP Server

    Installing Windows Server 2008 DHCP Server is easy. DHCP Server is now a “role” of Windows Server 2008 – not a Windows component as it was in the past. To do this, you will need a Windows Server 2008 system already installed and configured with a static IP address. You will need to know your network’s IP address range, the range of IP addresses you will want to hand out to your PC clients, your DNS server IP addresses, and your default gateway. Additionally, you will want to have a plan for all subnets involved, what scopes you will want to define, and what exclusions you will want to create.

    To start the DHCP installation process, you can click Add Roles from the Initial Configuration Tasks window or from Server Manager -> Roles -> Add Roles.

    Figure 1: Adding a new Role in Windows Server 2008

    When the Add Roles Wizard comes up, you can click Next on that screen. Next, select that you want to add the DHCP Server Role, and click Next.

    Figure 2: Selecting the DHCP Server Role

    If you do not have a static IP address assigned on your server, you will get a warning that you should not install DHCP with a dynamic IP address. At this point, you will begin being prompted for IP network information, scope information, and DNS information. If you only want to install DHCP server with no configured scopes or settings, you can just click Next through these questions and proceed with the installation. On the other hand, you can optionally configure your DHCP Server during this part of the installation. In my case, I chose to take this opportunity to configure some basic IP settings and configure my first DHCP Scope. I was shown my network connection binding and asked to verify it, like this:

    Figure 3: Network connection binding

    What the wizard is asking is, “what interface do you want to provide DHCP services on?” I took the default and clicked Next. Next, I entered my Parent Domain, Primary DNS Server, and Alternate DNS Server (as you see below) and clicked Next.

    Figure 4: Entering domain and DNS information

    I opted NOT to use WINS on my network and I clicked Next. Then, I was prompted to configure a DHCP scope for the new DHCP Server. I have opted to configure an IP address range of 192.168.1.50-100 to cover the 25+ PC clients on my local network. To do this, I clicked Add to add a new scope. As you see below, I named the scope WBC-Local, configured the starting and ending IP addresses of 192.168.1.50-192.168.1.100, a subnet mask of 255.255.255.0, a default gateway of 192.168.1.1, the type of subnet (wired), and activated the scope.

    Figure 5: Adding a new DHCP Scope

    Back in the Add Scope screen, I clicked Next to add the new scope (once the DHCP Server is installed). I chose to Disable DHCPv6 stateless mode for this server and clicked Next. Then, I confirmed my DHCP Installation Selections (on the screen below) and clicked Install.

    Figure 6: Confirm Installation Selections

    After only a few seconds, the DHCP Server was installed and I saw the window below:

    Figure 7: Windows Server 2008 DHCP Server Installation succeeded

    I clicked Close to close the installer window, then moved on to how to manage my new DHCP Server.

    How to Manage your new Windows Server 2008 DHCP Server

    Like the installation, managing Windows Server 2008 DHCP Server is also easy. Back in my Windows Server 2008 Server Manager, under Roles, I clicked on the new DHCP Server entry.

    Figure 8: DHCP Server management in Server Manager

    While I cannot manage the DHCP Server scopes and clients from here, what I can do is manage what events, services, and resources are related to the DHCP Server installation. Thus, this is a good place to go to check the status of the DHCP Server and what events have happened around it. However, to really configure the DHCP Server and see what clients have obtained IP addresses, I need to go to the DHCP Server MMC. To do this, I went to Start -> Administrative Tools -> DHCP Server, like this:

    Figure 9: Starting the DHCP Server MMC

    When expanded out, the MMC offers a lot of features. Here is what it looks like:

    Figure 10: The Windows Server 2008 DHCP Server MMC

    The DHCP Server MMC offers IPv4 & IPv6 DHCP Server info including all scopes, pools, leases, reservations, scope options, and server options. If I go into the address pool and the scope options, I can see that the configuration we made when we installed the DHCP Server did, indeed, work. The scope IP address range is there, and so are the DNS Server & default gateway.

    Figure 11: DHCP Server Address Pool

    Figure 12: DHCP Server Scope Options

    So how do we know that this really works if we do not test it? The answer is that we do not. Now, let’s test to make sure it works.

    How do we test our Windows Server 2008 DHCP Server?

    To test this, I have a Windows Vista PC client on the same network segment as the Windows Server 2008 DHCP server. To be safe, I have no other devices on this network segment. I did an IPCONFIG /RELEASE then an IPCONFIG /RENEW and verified that I received an IP address from the new DHCP server, as you can see below:

    Figure 13: Vista client received IP address from new DHCP Server

    Also, I went to my Windows 2008 Server and verified that the new Vista client was listed as a client on the DHCP server. This did indeed check out, as you can see below:

    Figure 14: Win 2008 DHCP Server has the Vista client listed under Address Leases

    With that, I knew that I had a working configuration and we are done!

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now?  No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access.  For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment.  ROHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey.  In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014.   Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives.  Additional evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO). Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012 entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example – Dell has invested in research to develop new products designed to reduce its customers’ emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell’s customers over $1billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value. How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, inability to quantify noncompliance, or not enough resources to justify investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. 
With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and even thrive in today’s highly regulated and transparent environment by implementing systematic approaches to product compliance that are more than functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.   Particularly, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives. Options are critical, but so is ease-of-use. Anyone who’s grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Set however many regulations as specifications you need to make sure a product can be sold in your target countries. Another option is to implement like one of our leading consumer electronics customers and define your own “catch-all” specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party’s data. With Agile PG&C you are able to design compliance earlier into your products to reduce cost and improve quality downstream when stakes are higher. Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don’t wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738. Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article. 

    Read the article

  • Ajax call to wcf windows service over ssl (https)

    - by bpatrick100
    I have a Windows service which exposes an endpoint over http. Again, this is a Windows service (not a web service hosted in IIS). I then call methods from this endpoint using javascript/ajax. Everything works perfectly, and this is the code I'm using in my Windows service to create the endpoint:

        //Create host object
        WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("http://192.168.0.100:1213"));
        //Add Https Endpoint
        WebHttpBinding binding = new WebHttpBinding();
        webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty);
        //Add MEX Behaivor and EndPoint
        ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior();
        metadataBehavior.HttpGetEnabled = true;
        webServiceHost.Description.Behaviors.Add(metadataBehavior);
        webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
        webServiceHost.Open();

    Now, my goal is to get this same model working over SSL (https, not http). So, I have followed the guidance of several MSDN pages, like the following: http://msdn.microsoft.com/en-us/library/ms733791(VS.100).aspx I have used makecert.exe to create a test cert called "bpCertTest". I have then used netsh.exe to configure my port (1213) with the test cert I created, all with no problem. Then, I've modified the endpoint code in my Windows service to be able to work over https as follows:

        //Create host object
        WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("https://192.168.0.100:1213"));
        //Add Https Endpoint
        WebHttpBinding binding = new WebHttpBinding();
        binding.Security.Mode = WebHttpSecurityMode.Transport;
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
        webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty);
        webServiceHost.Credentials.ServiceCertificate.SetCertificate("CN=bpCertTest", StoreLocation.LocalMachine, StoreName.My);
        //Add MEX Behaivor and EndPoint
        ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior();
        metadataBehavior.HttpsGetEnabled = true;
        webServiceHost.Description.Behaviors.Add(metadataBehavior);
        webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpsBinding(), "mex");
        webServiceHost.Open();

    The service creates the endpoint successfully, recognizes my cert in the SetCertificate() call, and the service starts up and runs with success. Now, the problem is that my javascript/ajax call cannot communicate with the service over https. I simply get a generic communication error (12031). So, as a test, I changed the port I was calling in the javascript to some other random port, and I get the same error - which tells me that I'm obviously not even reaching my service over https. I'm at a complete loss at this point; I feel like everything is in place, and I just can't see what the problem is. If anyone has experience in this scenario, please provide your insight and/or solution! Thanks!
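    Not part of the original question, but one way to narrow this down: since the endpoint is secured with a self-signed makecert certificate (and the binding above also demands a client certificate), a browser's XMLHttpRequest will typically refuse the connection before the WCF code is ever reached. A small C# test client that temporarily bypasses certificate validation can tell you whether the hosting side is actually fine; the /SomeOperation path below is hypothetical and should be replaced with a real operation on the contract:

        using System;
        using System.Net;

        class HttpsEndpointSmokeTest
        {
            static void Main()
            {
                // Test only: accept the self-signed "bpCertTest" certificate.
                // Never ship this in production code.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, cert, chain, errors) => true;

                using (var client = new WebClient())
                {
                    string response = client.DownloadString("https://192.168.0.100:1213/SomeOperation");
                    Console.WriteLine(response);
                }
            }
        }

    If this call fails in the same way, the transport security settings (the self-signed certificate and the client-certificate requirement) are the likely culprits; if it succeeds, the issue is probably on the browser side, where an untrusted certificate is usually enough to make an AJAX call fail with a generic error such as 12031.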

    Read the article

  • When adding WCF service reference, configuration details are not added to web.config

    - by Mikey Cee
    Hi, I am trying to add a WCF service reference to my web application using VS2010. It seems to add OK, but the web.config is not updated, meaning I get a runtime exception: Could not find default endpoint element that references contract 'CoolService.CoolService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. Obviously, because the service is not defined in my web.config. Steps to reproduce: Right click solution Add New Project ASP.NET Empty Web Application. Right click Service References in the new web app Add Service Reference. Enter address of my service and click Go. My service is visible in the left-hand Services section, and I can see all its operations. Type a namespace for my service. Click OK. The service reference is generated correctly, and I can open the Reference.cs file, and it all looks OK. Open the web.config file. It is still empty! <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings /> <client /> </system.serviceModel> Why is this happening? It also happens with a console application, or any other project type I try. Any help? Here is the app.config from my WCF service: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service name="CoolSQL.Server.WCF.CoolService"> <endpoint address="" binding="webHttpBinding" contract="CoolSQL.Server.WCF.CoolService" behaviorConfiguration="SilverlightFaultBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/CoolSQL.Server.WCF/CoolService/" /> </baseAddresses> </host> </service> </services> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <behavior name="SilverlightFaultBehavior"> <silverlightFaults /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <webHttpBinding> <binding name="DefaultBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:00:10" maxReceivedMessageSize="2147483647" transferMode="Streamed"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <extensions> <behaviorExtensions> <add name="silverlightFaults" type="CoolSQL.Server.WCF.SilverlightFaultBehavior, CoolSQL.Server.WCF" /> </behaviorExtensions> </extensions> <diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="false" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="false" maxMessagesToLog="3000" maxSizeOfMessageToLog="2000" /> </diagnostics> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" /> </startup> <system.diagnostics> <sources> <source name="System.ServiceModel.MessageLogging" switchValue="Information, 
ActivityTracing"> <listeners> <add name="messages" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\messages.e2e" /> </listeners> </source> </sources> </system.diagnostics> </configuration>

    Read the article

  • Installing MonoDevelop on Suse Enterprise 10.0

    - by Robert Harvey
    I tried to install MonoDevelop on Suse 11.0 Enterprise, using the 1-click install on the MonoDevelop download page, but quickly wound up in a tangle of missing dependencies. I then tried using the Suse software repositories to get MonoDevelop, and waded through several of the dependencies for awhile trying to get the necessary packages to fulfill the dependencies, but some of the packages in the Suse repositories actually appear to be missing the needed RPM files. Are these repositories no longer being actively maintained? I am aware that there is a CD on the Mono site (called the Mono LiveCD) that appears to contain a complete installation of the development environment, as well as a DVD for OpenSuse 11.2 (on the OpenSuse site) that might actually have all of the Mono software already installed. But the target environment for the utility I am writing is Suse 11.0 Enterprise Server. Does that matter? What is the shortest distance between two points here?

    Read the article

  • SocketException preventing use of C# TCPListener in Windows Service

    - by JoeGeeky
    I have a Windows Service that does the following when started. When running via a console application it works fine, but once I put it in a Windows Service I get the exception below. Here is what I have tried so far:

    · Disabled the firewall, and also tried adding explicit exclusions for the exe, port, and protocol
    · Checked CAS policy config, which shows unrestricted rights
    · Configured the service to run as an Administrator account, Local System, Local Service, and Network Service, each with the same result
    · Tried different ports
    · Also tried 127.0.0.1 just to see... same issue

    This is wrecking my head, so any help would be greatly appreciated. The code:

        var _listener = new TcpListener(endpoint); // 192.168.2.2:20000
        _listener.Start();

    The resulting exception:

        Service cannot be started. System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
           at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.Sockets.Socket.Bind(EndPoint localEP)
           at System.Net.Sockets.TcpListener.Start(Int32 backlog)
           at System.Net.Sockets.TcpListener.Start()
           at Server.RequestHandler.StartServicingRequests(IPEndPoint endpoint)
           at Server.Server.StartServer(String[] args)
           at Server.Server.OnStart(String[] args)
           at System.ServiceProcess.ServiceBase.ServiceQueuedMainCallback(Object state)
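    A diagnostic aside (not from the original post): when a bind works under a console host but fails only inside a service, it helps to catch the SocketException in OnStart and log its native error code, because the Service Control Manager only surfaces a generic "service cannot be started" message. A rough sketch with hypothetical names (ListenerService, and an event source that is assumed to exist or be creatable by the service account):

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Sockets;
        using System.ServiceProcess;

        public class ListenerService : ServiceBase
        {
            private TcpListener _listener;

            protected override void OnStart(string[] args)
            {
                try
                {
                    // Same endpoint as in the question: 192.168.2.2:20000.
                    var endpoint = new IPEndPoint(IPAddress.Parse("192.168.2.2"), 20000);
                    _listener = new TcpListener(endpoint);
                    _listener.Start();
                }
                catch (SocketException ex)
                {
                    // SocketErrorCode/ErrorCode identify the exact native failure
                    // (the message quoted above corresponds to 10013, WSAEACCES).
                    EventLog.WriteEntry("ListenerService",
                        string.Format("Bind failed: {0} ({1})", ex.SocketErrorCode, ex.ErrorCode),
                        EventLogEntryType.Error);
                    throw;
                }
            }

            protected override void OnStop()
            {
                if (_listener != null)
                {
                    _listener.Stop();
                }
            }
        }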

    Read the article

  • SQL Server Upgrade 'Developer > Enterprise'

    - by JD
    Hey guys, My company purchased Visual Studio Pro 2008 last year, which had a 'free' copy of SQL Server Developer, which I have been using for development. We are wanting to upgrade the copy of developer edition to enterprise (As we now want to use the server as a production server), and have purchased the licenses for this. Now... Morally we're in the clear... However does this comply with MS licensing T&C's? We have Developer installed how we want it, and don't really want to uninstall SQL Server Dev just to install SQL Server Ent. Is there a way to transfer the license key to our Enterprise key without having to reinstall? Thanks, JD

    Read the article

  • java - call web service operation - wrong return type

    - by user1639680
    I have a simple web service with one method that returns List<org.company.data.mp>. I've created a simple web service client and specified a web service with a WSDL. In NetBeans I try to call a web service operation: right click, Insert Code..., and I pick my web service operation. The code gets inserted, but the method's return type is not List<org.company.data.mp>; it is List<org.company.server.mp>! I don't get it... in the package "server" there is no class called mp! I checked the implementation class of my web service - it says the return type is ...data.mp, not ...server.mp.

    Read the article

  • Ruby New dnssd (bonjour zeroconf) service not appearing while browsing

    - by Poul
    Here is my simple zeroconf (aka Bonjour dnssd) browser. If I have other services running when I start the browser I can see them (the 'resolved to' line prints to the screen). However, if I start up another service while this browser is running it will not appear. It just waits at the top of the block, so I would expect it to enter the block once a new service is registered. Any ideas?

        require 'rubygems'
        require 'dnssd'

        browser = DNSSD::Service.new
        browser.browse '_http._tcp.' do |reply| # <-- code seems to wait here for more services
          DNSSD.resolve reply do |r|
            puts "resolved to: http://#{r.target}:#{r.port}"
          end
        end

        # example service
        register_service = DNSSD::register("My Service", "_http._tcp", nil, my_port) do
          puts "* Registering the service *"
        end

    Read the article

  • referencing a WCF web service

    - by ErnieStings
    Our current project uses an asmx service. We want to keep this service for now, but would like to add an additional WCF service for AJAX calls. I followed a procedure I found online to set up the service, and it works fine with JavaScript in aspx files within that particular project, but I'm unsure how to reference it in JavaScript files in a different project (in the same solution). If someone could point me in the right direction it would be much appreciated. Thanks, Shawn

    EDIT: I wish to make calls in JavaScript similar to the following:

        function Button1_onclick() {
            var service = new AjaxServices.TestService();
            service.wcfTest(4, onSuccess, null, null);
        }

        function onSuccess(result) {
            document.getElementById("ajaxPlaceHolder").innerHTML = "<p>" + result + "</p>";
        }

    but I'm willing to explore the jQuery option as well.

    Read the article

  • WCF service with 2 Bindings and 2 Base Addresses

    - by Sean
    I have written a WCF service (I am a newb) that I want to provide 2 endpoints for (net.tcp & basicHttp) The problem comes when I try to configure the endpoints. If I configure them as seperate services, then my service names are the same which causes a problem. I have seen recomended creating shim classes (classA : MyService, and ClassB : MyService) but that seems smelly. <services> <service name="MyWcfService.MyService" behaviorConfiguration="MyWcfService.HttpBehavior"> <endpoint name="ApplicationHttp" address="Application" binding="basicHttpBinding" bindingConfiguration="HttpBinding" contract="MyWcfService.Interfaces.IMyService" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8731/MyWcfService/" /> </baseAddresses> </host> </service> <service name="MyWcfService.MyService" behaviorConfiguration="MyWcfService.MyBehavior"> <endpoint name="Application" address="Application" binding="netTcpBinding" bindingConfiguration="SecuredByWindows" contract="EmsHistorianService.Interfaces.IApplicationHistorianService" /> <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:49153/MyWcfService" /> </baseAddresses> </host> </service> </services> I have tried using a single service with the base address integrated into the address, but that gives me errors as well <services> <service name="MyWcfService.MyService" behaviorConfiguration="MyWcfService.HttpBehavior"> <endpoint name="ApplicationHttp" address="http://localhost:8731/MyWcfService/Application" binding="basicHttpBinding" bindingConfiguration="HttpBinding" contract="MyWcfService.Interfaces.IMyService" /> <endpoint address="http://localhost:8731/MyWcfService/mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <endpoint name="Application" address="net.tcp://localhost:49153/MyWcfService/Application" binding="netTcpBinding" bindingConfiguration="SecuredByWindows" contract="EmsHistorianService.Interfaces.IApplicationHistorianService" /> <endpoint address="net.tcp://localhost:49153/MyWcfService/mex" binding="mexTcpBinding" contract="IMetadataExchange" /> </service> </services> Any ideas?
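    For what it's worth, a single service can expose both bindings from one host without shim classes; the layout in the second attempt is essentially right. If the service is self-hosted, the same shape expressed in code (using the hypothetical MyWcfService names from the question) can be a quick way to confirm the endpoint layout is valid before going back to the config file:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        class Program
        {
            static void Main()
            {
                // One host, one base address per transport.
                var host = new ServiceHost(typeof(MyWcfService.MyService),
                    new Uri("http://localhost:8731/MyWcfService/"),
                    new Uri("net.tcp://localhost:49153/MyWcfService"));

                // One application endpoint per binding; relative addresses resolve
                // against the base address with the matching transport scheme.
                host.AddServiceEndpoint(typeof(MyWcfService.Interfaces.IMyService),
                    new BasicHttpBinding(), "Application");
                host.AddServiceEndpoint(typeof(MyWcfService.Interfaces.IMyService),
                    new NetTcpBinding(), "Application");

                // Metadata over both transports.
                host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });
                host.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName,
                    MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
                host.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName,
                    MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    In config, the equivalent is a single <service> element whose <host><baseAddresses> section lists both base addresses, with one endpoint per binding underneath it.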

    Read the article

  • MVC Client Validation from Service Layer

    - by GibboK
    I'm following this article http://www.asp.net/mvc/tutorials/older-versions/models-(data)/validating-with-a-service-layer-cs to include a service layer with business logic in my MVC web application. I'm able to pass messages from the service layer to the view model and show them in an Html.ValidationSummary using the ModelState class. I perform basic validation logic on the view model (using DataAnnotation attributes) and I have client validation enabled by default, which displays the error message next to every single field of my form. The business logic error messages that come from the service layer are displayed in the Html.ValidationSummary only after posting the form to the server. After validation in the service layer I would like to highlight one or more fields and have the message from the service layer show on those fields instead of in the Html.ValidationSummary. Any idea how to do it? Thanks
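    One approach, sketched here rather than taken from the article: when the service layer reports a rule violation for a specific property, add it to ModelState under that property's name instead of an empty key; ASP.NET MVC then renders it next to the matching input via Html.ValidationMessageFor (and Html.ValidationSummary(true) keeps property-level errors out of the summary). The ProductController, IProductService and BusinessRuleException names below are hypothetical:

        using System.Web.Mvc;

        public class ProductController : Controller
        {
            private readonly IProductService _productService;   // hypothetical service-layer interface

            public ProductController(IProductService productService)
            {
                _productService = productService;
            }

            [HttpPost]
            public ActionResult Create(ProductViewModel model)
            {
                if (ModelState.IsValid)
                {
                    try
                    {
                        _productService.CreateProduct(model);
                        return RedirectToAction("Index");
                    }
                    catch (BusinessRuleException ex)   // hypothetical: carries the offending property name
                    {
                        // An error keyed by property name (e.g. "UnitPrice") is shown next to
                        // that field by Html.ValidationMessageFor; an empty key ("") only
                        // appears in Html.ValidationSummary.
                        ModelState.AddModelError(ex.PropertyName ?? string.Empty, ex.Message);
                    }
                }

                return View(model);
            }
        }

    Note this still happens on the round trip to the server; true client-side validation only covers the DataAnnotation rules, which matches the behaviour described above.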

    Read the article

  • WCF Service Library Reference in a Web Form (asp.net)

    - by Abu Hamzah
    I am not sure if this is the correct way of doing it, but I read that you are not supposed to have a WCF Service Library reference in your Web Forms project; rather, you add endpoints to your web.config. Is that true? Here is what I have done:

    1) Create a WCF service library project
    2) Create a simple service called "MyService.svc"

    WebForm:
    1) Create a web project
    2) Create a WCF service, and in it I have this code:

        <%@ ServiceHost Language="C#" Service="WCFJQuery.ContactBLL.Implementation.ContactUs" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

    3) Right-click on the web project, choose "Add Reference", and add the MyService.dll reference from the WCF service library project.

    Is this how you are supposed to do it?

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that:

    · accepts HTTP requests (with query string or HTTP POST parameters)
    · does some processing on the requests (checking if the request is valid, authentication etc.)
    · reads data from a custom store (another HTTP call in our case)
    · returns the result as custom XML (defined with XSD)

    I'm trying to think of various MS technologies that could help me and how good they would be for this scenario (a pretty standard one, I guess). The tasks above are relatively separate; this is what comes to mind:

    HTTP front-end:
    · ASP.NET Web Forms
    · ASP.NET MVC (seems more appropriate here as I won't need server controls, view state etc.)
    · WCF? Don't know much about it or how well it would suit my task.

    Custom logic on the server: this will probably be generic C# code in all cases (sometimes "plugged into" or called from MVC controllers or some equivalent place in other technologies).

    Reading data from internal data stores: as said, this is another HTTP server in our case. Options that come to mind:
    · Just read the data using something like WebClient
    · (Just theoretically) implement a LINQ provider
    · (Just even more theoretically) implement an EF provider

    Output the data as custom XML:
    · Linq2XML
    · Serialization? Is it flexible enough? Does WCF provide some tools for this?
    · Some "OXM" - Object/XML mapper, if there is something like that for .NET

    I may be wrong in many of my assumptions; this is just a quick list that comes to mind after quick research. Some general notes / questions:
    · Testing is important
    · A solution with a clear domain model would be much preferred over one without
    · Can Entity Framework actually help somewhere in my scenario? If so, where and how?
    · Would WCF be an appropriate technology for this? I don't know much about it.
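    To make one of the options above concrete, here is a rough sketch (an illustration, not a recommendation) of the ASP.NET MVC + LINQ to XML route: an action that validates a query-string parameter, reads from the internal HTTP store with WebClient, and returns custom XML. The controller name, backing URL and XML element names are all invented for the example; the real element names would come from the XSD:

        using System.Net;
        using System.Web;
        using System.Web.Mvc;
        using System.Xml.Linq;

        public class LookupController : Controller
        {
            // GET /Lookup/Get?id=42 (hypothetical route and parameter)
            public ActionResult Get(string id)
            {
                if (string.IsNullOrEmpty(id))
                {
                    throw new HttpException(400, "id is required");
                }

                // The "custom store" is another HTTP service; this URL is a placeholder.
                string raw;
                using (var client = new WebClient())
                {
                    raw = client.DownloadString("http://internal-store/items/" + id);
                }

                // Shape the response as custom XML with LINQ to XML.
                var doc = new XDocument(
                    new XElement("result",
                        new XElement("id", id),
                        new XElement("payload", raw)));

                return Content(doc.ToString(), "text/xml");
            }
        }

    For testability, the WebClient call and the XML shaping can sit behind small interfaces so the action can be covered without hitting the internal store.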

    Read the article

  • Deploying service from development server to iis7 server

    - by MindWorX
    I have a service which works perfectly on the local development server, but once moved to the remote IIS7 server, it fails. I've been browsing the service in a browser manually. Here are the steps I've been taking:

    1. Open up Service.svc
    2. Open up Service.svc?wsdl
    3. Open up Service.svc?wsdl0
    4. Open up Service.svc?xsd=xsd0

    Step 4 is where it fails. If I browse on the development server it works. If I browse on the IIS7 server, I get a connection reset error. Any help appreciated.

    Read the article

  • jQuery AJAX Web service works only locally

    - by Greg
    Hi, I have a simple ASP.NET Web Service [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.Web.Script.Services.ScriptService] public class Service : System.Web.Services.WebService { public Service () { } [WebMethod] public string SetName(string name) { return "hello my dear friend " + name; } } For this Web Service I created Virtual Directory, so I can receive the access by taping http://localhost:89/Service.asmx. I try to call it via simple html page with jQuery. For this purpose I use function CallWS() { $.ajax({ type: "POST", data: "{'name':'Pumba'}", dataType: "json", url: "http://localhost:89/Service.asmx/SetName", contentType: "application/json; charset=utf-8", success: function (msg) { $('#DIVid').html(msg.d); }, error: function (e) { $('#DIVid').html("Error"); } }); The most interesting fact: If I create the html page in the project with my WebService and change url to Service.asmx/SetName everything works excellent. But if I try to call this webservice remotely - success function works but msg is null. After that I tried to call this service even via SOAP. It is the the same - locally it works excellent, but remotely - not at all. var ServiceUrl = 'http://localhost:89/Service.asmx?op=SetName'; function beginSetName(Name) { var soapMessage = '<?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <SetName xmlns="http://tempuri.org/"> <name>' + Name + '</name> </SetName> </soap:Body> </soap:Envelope>'; $.ajax({ url: ServiceUrl, type: "POST", dataType: "xml", data: soapMessage, complete: endSetName, contentType: "text/xml; charset=\"utf-8\"" }); return false; } function endSetName(xmlHttpRequest, status) { $(xmlHttpRequest.responseXML) .find('SetNameResult') .each(function () { var name = $(this).text(); alert(name); }); } In this case status has value "parseerror". Could you please help me to resolve this problem? What should I do to call another WebService remotely by url via jQuery. Thank you in advance, Greg

    Read the article

  • pro*C in oracle XE

    - by srandpersonia
    I downloaded the free express edition of Oracle, Oracle XE. I couldn't find the Pro*C compiler in this edition. I read somewhere that the Oracle 9i client has Pro*C, so I presumed that the Oracle client for 10g XE should have it too and downloaded it. But to my disappointment, I can't find it there either. :( Is there a way to download the older Oracle 9i and use it to connect to 10g XE without any compatibility problems? Or is it possible to download the Pro*C compiler alone? I don't want to download the standard editions as they are too large (2 GB). Thanks.

    Read the article
