Search Results

Search found 13950 results on 558 pages for 'durable services'.


  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have very limited WiFi bandwidth to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers:

    - Secure the router and only let a few trusted people use it
    - Tell everyone to turn off unused services and generally police themselves
    - Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of fewer than 200 people, typically with fewer than half of them using the WiFi - something that can be handled with a single home or small office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp, Skepticamp or user group meeting. These meetings sometimes have technical attendees, but not exclusively. Usually a third to a half of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G Internet connection and fan it out to multiple users over WiFi.

    One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best-case scenario; often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media and chat, so they can live-blog or live-tweet the proceedings, or simply stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those that meet those needs.

    Problems

    In particular I have noticed a couple of scenarios where particular users end up monopolizing most of the bandwidth on the router, to the detriment of everyone else. These boil down to two areas:

    - Intentional use. Folks watching YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room who had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something taking place right in front of them.
    - Unintentional use. There are a variety of software utilities that make extensive use of bandwidth in the background, which folks often have installed on their laptops and smartphones, perhaps without realizing. Examples: peer-to-peer downloading programs such as BitTorrent that run in the background; automatic software update services (these are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background); security software that downloads new signatures, such as anti-virus and anti-malware; and backup software and other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold.

    Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location happens to be particularly excellent - in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically last only a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got

    First, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a two-day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI. I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much: because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legitimate things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

    - Microsoft automatic updates
    - Apple automatic updates
    - Adobe automatic updates
    - Google automatic updates
    - Other major software update services
    - Major virus/malware/security signature updates
    - Major background backup services
    - Other services that run in the background and can eat lots of bandwidth

    I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the Cloud is on-demand scalability, which lets cloud application developers scale up or scale down the number of compute resources hosted on the Cloud. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud-based solutions: it can unleash the real power of the Cloud by giving apps truly on-demand scalability, and it can also guard the organizational budget for cloud-based application deployment.

    In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the autoscaling block in a Windows Azure Worker Role to apply auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service; it is available to the individual Windows Azure Compute Services as part of scaling.

    Configure Auto Scaling for a Windows Azure Cloud Service

    Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will use a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal, and choose the "SCALE" tab. The Scale tab shows all information regarding Auto Scaling. The image below shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE. The QUEUE option is not available for Web Sites.

    The image below demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 Virtual Machine instances based on CPU utilization, with a target range of 50 to 80%. If aggregate utilization rises above 80%, instances will be scaled up; when utilization falls below 50%, instances will be scaled down.

    The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage Queue. We configured the worker role app to run with 1 to 3 Virtual Machine instances based on the messages in the Queue, specifying a target of 2,000 messages per machine. The image below shows the summary of Auto Scaling for the Cloud Service after configuring the Auto Scaling service.

    Summary

    Auto Scaling is an extremely important behaviour of Cloud applications, providing on-demand scalability without any manual intervention. Windows Azure provides strong support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. With the new Auto Scaling service, you don't have to host any monitoring service as you did with the WASABi block, which makes it an excellent alternative to manually hosting the WASABi block in a Worker Role app.
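    To make the scale-up/scale-down rule concrete, here is a minimal C# sketch of the threshold logic that the portal settings above express. It is an illustration only, not the actual Auto Scaling engine; the metric sampling and instance counts are assumptions for the sketch.

        // Minimal sketch of the CPU-based rule configured above (illustrative only):
        // 1 to 5 instances, scale up above 80% aggregate CPU, down below 50%.
        public class CpuScalingRule
        {
            private const int MinInstances = 1;
            private const int MaxInstances = 5;
            private const double ScaleUpThreshold = 80.0;
            private const double ScaleDownThreshold = 50.0;

            // Given the current aggregate CPU utilization (%) and instance count,
            // return the instance count the rule would request next.
            public int NextInstanceCount(double aggregateCpuPercent, int currentInstances)
            {
                if (aggregateCpuPercent > ScaleUpThreshold && currentInstances < MaxInstances)
                    return currentInstances + 1;
                if (aggregateCpuPercent < ScaleDownThreshold && currentInstances > MinInstances)
                    return currentInstances - 1;
                return currentInstances; // inside the 50-80% band: hold steady
            }
        }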

    Read the article

  • Why is there no service-oriented language?

    - by Wolfgang
    Edit: To avoid further confusion: I am not talking about web services and such. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers and how the imperative programming paradigm is extended.

    Original: In the imperative programming field, we have seen two paradigms in the past 20 years (or more): object-oriented (OO), and service-oriented (SO), aka component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services). However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatibles, C# and all other .NET-compatibles, Python, etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ... These SO frameworks clearly suffer from the missing native language support for their concepts:

    - They start using OO classes to represent services and records. This leads to designs where there is a clear distinction between classes that have methods only (services) and those that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strictly, but in general programmers are advised to make classes play only one of the two roles.
    - They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even embedded DSLs as in Guice. This is especially needed, but not limited to, because the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities; in SO there is, because services don't instantiate or configure other services.
    - They establish an inner-platform effect on top of OO (early EJB, CORBA) where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service, and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler that would do it for the programmer. This is just like what some people did in C for OO before there was C++: you pass the record which holds the data of the object as a first parameter to the procedure which is the method. In an OO language this parameter is implicit and the compiler produces all the code we need for virtual functions etc. For SO, this is clearly missing.
    - Especially the newer frameworks extensively use AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness but avoids the boilerplate code described in the previous point.
    - Some frameworks use code generation to produce the boilerplate code. Configuration files in XML or annotations in OO code are the source of information for this.

    Not all of the phenomena that I mentioned above can be attributed to SO, but I hope it clearly shows that there is a need for an SO language. Since this paradigm is so popular: why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
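    The class-role split the poster describes is easy to picture in code. Below is a minimal, hypothetical C# sketch of the convention SO frameworks impose on an OO language: one class holds only data (the record), another holds only behaviour (the service), and composition is wired externally rather than by the objects themselves. All type names here are invented for illustration.

        // A "record": data only, no behaviour (what SO frameworks call a bean/DTO).
        public class Order
        {
            public int Id;
            public decimal Amount;
        }

        public interface ITaxService
        {
            decimal TaxFor(decimal amount);
        }

        // A "service": behaviour only, no data fields besides its dependencies.
        // Note it never instantiates its collaborator; composition happens outside,
        // typically in XML configuration or annotations, as the post describes.
        public class BillingService
        {
            private readonly ITaxService taxService; // injected, not created here

            public BillingService(ITaxService taxService)
            {
                this.taxService = taxService;
            }

            public decimal TotalDue(Order order)
            {
                return order.Amount + taxService.TaxFor(order.Amount);
            }
        }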

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it's watched by about 1 billion people around the world. Every four years each national team's manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil's national team, only four actually play for Brazilian teams; the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country's national league, each team and each coach has a unique style. Getting all these "localized" players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake - not least national pride and global bragging rights until the next World Cup in four years' time.

    Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, but it's also a source of regional pride. BCG expects that by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020, from 30% in 2010.

    Banks offering transactional banking services to this emerging marketplace need to prepare to respond to customer needs across the spectrum - MSMEs, SMEs, corporates and multinational corporations. Customers want innovative, differentiated, value-added products and services that provide:

    • Pan-regional operational independence while enabling a single source of truth at a regional level
    • Regional connectivity and cash and liquidity optimization
    • A consistent experience for their customers by offering standardized products and services across all ASEAN countries
    • Multi-channel and self-service capabilities / access to real-time information on liquidity and cash flows
    • Convergence of cash management with supply chain and trade finance

    While meeting these customer demands, a comprehensive and robust credit management solution is a must for effective regional banking operations and for managing risk. According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022, from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They'll also have to choose the right operating model and regional IT platform to offer:

    • Account Services
    • Cash & Liquidity Management
    • Trade Services & Supply Chain Financing
    • Payments
    • Securities Services
    • Credit and Lending
    • Treasury Services

    The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed at a country level. Financial reporting and regulatory compliance are a case in point. The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league. Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs, and establishing a foundation for revenue generation for years to come.

    Anand Ramachandran
    Senior Director, Global Banking Solutions Practice
    Oracle Financial Services Global Business Unit

    Read the article

  • Why does Keychain Services return the wrong keychain content?

    - by Graham Lee
    I've been trying to use persistent keychain references in an iPhone application. I found that if I created two different keychain items, I would get a different persistent reference each time (they look like 'genp.......1', 'genp.......2', ...). However, attempts to look up the items by persistent reference always returned the content of the first item. Why should this be? I confirmed that my keychain-saving code was definitely creating new items in each case (rather than updating existing items), and was not getting any errors. And as I say, Keychain Services is giving a different persistent reference for each item. I've managed to solve my immediate problem by searching for keychain items by attribute rather than by persistent reference, but it would be easier to use persistent references, so I'd appreciate solving this problem. Here's my code:

        - (NSString *)keychainItemWithName: (NSString *)name
        {
            NSString *path = [GLApplicationSupportFolder() stringByAppendingPathComponent: name];
            NSData *persistentRef = [NSData dataWithContentsOfFile: path];
            if (!persistentRef)
            {
                NSLog(@"no persistent reference for name: %@", name);
                return nil;
            }
            NSArray *refs = [NSArray arrayWithObject: persistentRef];
            //get the data
            CFMutableDictionaryRef params = CFDictionaryCreateMutable(NULL, 0,
                &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            CFDictionaryAddValue(params, kSecMatchItemList, refs);
            CFDictionaryAddValue(params, kSecClass, kSecClassGenericPassword);
            CFDictionaryAddValue(params, kSecReturnData, kCFBooleanTrue);
            CFDataRef item = NULL;
            OSStatus result = SecItemCopyMatching(params, (CFTypeRef *)&item);
            CFRelease(params);
            if (result != errSecSuccess)
            {
                NSLog(@"error %d retrieving keychain reference for name: %@", result, name);
                return nil;
            }
            NSString *token = [[NSString alloc] initWithData: (NSData *)item
                                                    encoding: NSUTF8StringEncoding];
            CFRelease(item);
            return [token autorelease];
        }

        - (void)setKeychainItem: (NSString *)newToken forName: (NSString *)name
        {
            NSData *tokenData = [newToken dataUsingEncoding: NSUTF8StringEncoding];
            //firstly, find out whether the item already exists
            NSDictionary *searchAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                name, kSecAttrAccount,
                kCFBooleanTrue, kSecReturnAttributes,
                nil];
            NSDictionary *foundAttrs = nil;
            OSStatus searchResult = SecItemCopyMatching((CFDictionaryRef)searchAttributes,
                                                        (CFTypeRef *)&foundAttrs);
            if (noErr == searchResult)
            {
                NSMutableDictionary *toStore = [foundAttrs mutableCopy];
                [toStore setObject: tokenData forKey: (id)kSecValueData];
                OSStatus result = SecItemUpdate((CFDictionaryRef)foundAttrs,
                                                (CFDictionaryRef)toStore);
                if (result != errSecSuccess)
                {
                    NSLog(@"error %d updating keychain", result);
                }
                [toStore release];
                return;
            }
            //need to create the item.
            CFMutableDictionaryRef params = CFDictionaryCreateMutable(NULL, 0,
                &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            CFDictionaryAddValue(params, kSecClass, kSecClassGenericPassword);
            CFDictionaryAddValue(params, kSecAttrAccount, name);
            CFDictionaryAddValue(params, kSecReturnPersistentRef, kCFBooleanTrue);
            CFDictionaryAddValue(params, kSecValueData, tokenData);
            NSData *persistentRef = nil;
            OSStatus result = SecItemAdd(params, (CFTypeRef *)&persistentRef);
            CFRelease(params);
            if (result != errSecSuccess)
            {
                NSLog(@"error %d from keychain services", result);
                return;
            }
            NSString *path = [GLApplicationSupportFolder() stringByAppendingPathComponent: name];
            [persistentRef writeToFile: path atomically: NO];
            [persistentRef release];
        }

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One question, though, is what if you can't predict the growth of demand on your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas in a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers establishing themselves in the ever-growing market. The problem, however, is that Commerce Server is still a product with a legacy tightly-coupled dependency on Windows and IIS components.

    Commerce Server 2009, codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host them in multiple places - leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints.

    All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements; and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to deploy it exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure App Fabric - in terms of the service bus and authentication services - and Windows Azure to host any online presence you may require. The architecture would be something similar to this:

    This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which based on your business model would usually consist of separate services for Catalogue, Orders, Search, Profiles, and Marketing. The App Fabric service bus is then used to abstract and aggregate the services further, making them available to the cloud and subsequently secured by App Fabric's authentication services. These services are now available for consumption by any client, using any supported technology - not just .NET. This means you are now able to construct apps for iPhone, integrate with Java-based POS devices, and much more. This aggregation is useful and forms the basis of the wider strategy of diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.

    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale and the elasticity of the cloud. Just before the launch of the Azure Platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs the support of scalable infrastructure - such as a high-demand public-facing e-Commerce portal, or the promotional element of a brand.

    Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure supports other languages too, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby - or equally in ASP.NET or Silverlight - without having to change any of the underlying business or Commerce Server implementation.

    This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!
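    As an illustration of the domain-oriented service split described above, here is a minimal, hypothetical C# WCF contract for a Catalogue service. The operation names and the CatalogueItem entity are assumptions made for the sketch, not Commerce Server's actual API; Orders, Search, Profiles and Marketing services would follow the same shape.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical serializable entity crossing the service boundary.
        [DataContract]
        public class CatalogueItem
        {
            [DataMember] public string Sku { get; set; }
            [DataMember] public string DisplayName { get; set; }
            [DataMember] public decimal ListPrice { get; set; }
        }

        // One service per business domain (Catalogue here); the endpoint can be
        // exposed internally and relayed through the App Fabric service bus.
        [ServiceContract(Namespace = "http://example.com/commerce/catalogue")]
        public interface ICatalogueService
        {
            [OperationContract]
            CatalogueItem GetItem(string sku);

            [OperationContract]
            CatalogueItem[] Search(string keyword, int maxResults);
        }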

    Read the article

  • Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    - by Kellsey Ruppel
    CUSTOMER AND PARTNER INFORMATION

    Customer Name – Siemens AG, Sector Healthcare
    Customer Revenue – 73.515 billion Euro (2011, Siemens AG total)
    Customer URL – www.siemens.com
    Customer Headquarters – Erlangen, Germany
    Industry – Industrial Manufacturing
    Employees – 360,000

    Customer Quote – "The realization of our complex requirements within a very short amount of time was enabled through the competent implementation partner Sapient, who fully used the very broad scope of standard functionality provided in the Oracle WebCenter Portal, and the management of customer services, who continuously supported the project setup." – Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG

    The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging, laboratory diagnostics, medical information technology and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source – from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better and more cost-effective. To ensure greater transparency, increased efficiency, higher user acceptance, and additional services, Siemens AG, Sector Healthcare, replaced several existing legacy portal solutions that could not meet the company's future needs with Oracle WebCenter Portal. The various existing portal solutions that cannot meet future demands will be successively replaced by the new central service portal, which also allows for the efficient and intuitive implementation of new service concepts. With Oracle, doctors and hospitals using Siemens medical solutions now have access to a central information portal that provides important information and services at the push of a button.

    Challenges –
    • Replace disparate medical service portals to meet future demands and eliminate the unnecessarily high level of administrative work caused by heterogeneous installations
    • Ensure portals meet current user demands to improve user-acceptance rates and increase the number of total users
    • Enable changes and expansion through standard functionality to eliminate reliance on IT and reduce administrative efforts and the associated high costs
    • Ensure efficient and intuitive implementation of new service concepts for all devices and systems
    • Enable hospitals and clinics to transparently monitor and measure services rendered for the various medical devices and systems
    • Increase electronic interaction and expand services to achieve a higher level of customer loyalty

    Solution –
    • Deployed Oracle WebCenter Portal to ensure greater transparency and, as a result, a higher level of customer loyalty
    • Provided a centralized platform for doctors and hospitals using Siemens' medical technology solutions that delivers important information and services at the push of a button
    • Reduced the administrative workload significantly by centralizing the solution in the new customer service portal
    • Secured positive feedback from customers involved in the pilot program developed by design experts from Oracle partner Sapient; the interfaces were created with customer needs in mind, and the first survey taken shortly after implementation came back with 2.4 points on a scale of 0-3 in the category "customer service portal intuitiveness level"
    • Met all requirements, including alignment with the Siemens Style Guide, without extensive programming
    • Implemented additional services via the portal, such as benchmarking options, to ensure optimal use of the Customer Device Park
    • Provided the option to document all services rendered in conjunction with the medical technology systems, so that the value of the services is transparent for decision makers in the hospitals
    • Saved and stored all machine data from approximately 100,000 remote systems in the central service and information platform
    • Provided the option to register errors online and follow the call status in real time on the portal
    • Made available at the push of a button all information on the medical technology devices used in hospitals or clinics – from security checks and maintenance activities to current device statuses
    • Provided Service Performance Reports in PDF format that summarize information from periods ranging from previous weeks up to one year, meeting medical product law requirements

    Why Oracle – Siemens AG favored Oracle for many reasons; however, the company ultimately decided to go with Oracle due to the enormous range of functionality the solutions offered for the healthcare sector. "We are not programmers; we are service providers in the medical technology segment and focus on the contents of the portal. All the functionality necessary for internet-based customer interaction is already standard in Oracle WebCenter Portal, which is a huge plus for us. Having Oracle as our technology partner ensures that the product will continually evolve, providing a strong technology platform for our customer service portal well into the future," said Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG.

    Partner Involvement – Siemens AG selected Oracle Partner Sapient because the company offered a service portfolio that perfectly met Siemens' requirements and had a wealth of experience implementing Oracle WebCenter Portal. Additionally, Sapient had designers with a very high level of expertise in usability – an aspect that Siemens considered to be of vast importance for the project. "The Sapient team completely met all our expectations. Our tightly timed project was completed on schedule, and the positive feedback from our users proves that we set the right measures in terms of usability – all thanks to the folks at Sapient," Modlmayr said.

    Partner Name – Sapient GmbH Deutschland
    Partner URL – www.sapient.com

    Read the article

  • Disabling CPU management

    - by Tiffany Walker
    If I add processor.max_cstate=0 to the kernel command line at boot, does that disable all CPU power management and throttling? I also found: http://www.experts-exchange.com/OS/Linux/Administration/A_3492-Avoiding-CPU-speed-scaling-in-modern-Linux-distributions-Running-CPU-at-full-speed-Tips.html

    The link talks of changing the CPU governor from 'ondemand' to 'performance' for all CPUs/cores and disabling kondemand in the kernel. The server is for web hosting.

    UPDATES:

        2.6.32-379.1.1.lve1.1.7.6.el6.x86_64 #1 SMP Sat Aug 4 09:56:37 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

        # dmidecode 2.11
        SMBIOS 2.6 present.
        74 structures occupying 2878 bytes.
        Table at 0x0009F000.

        Handle 0x0000, DMI type 0, 24 bytes
        BIOS Information
            Vendor: American Megatrends Inc.
            Version: 1.0c
            Release Date: 05/27/2010
            Address: 0xF0000
            Runtime Size: 64 kB
            ROM Size: 4096 kB
            Characteristics:
                ISA is supported
                PCI is supported
                PNP is supported
                BIOS is upgradeable
                BIOS shadowing is allowed
                ESCD support is available
                Boot from CD is supported
                Selectable boot is supported
                BIOS ROM is socketed
                EDD is supported
                5.25"/1.2 MB floppy services are supported (int 13h)
                3.5"/720 kB floppy services are supported (int 13h)
                3.5"/2.88 MB floppy services are supported (int 13h)
                Print screen service is supported (int 5h)
                8042 keyboard services are supported (int 9h)
                Serial services are supported (int 14h)
                Printer services are supported (int 17h)
                CGA/mono video services are supported (int 10h)
                ACPI is supported
                USB legacy is supported
                LS-120 boot is supported
                ATAPI Zip drive boot is supported
                BIOS boot specification is supported
                Targeted content distribution is supported
            BIOS Revision: 8.16

        Handle 0x0001, DMI type 1, 27 bytes
        System Information
            Manufacturer: Supermicro
            Product Name: X8SIE
            Version: 0123456789
            Serial Number: 0123456789
            UUID: 49434D53-0200-9033-2500-33902500D52C
            Wake-up Type: Power Switch
            SKU Number: To Be Filled By O.E.M.
            Family: To Be Filled By O.E.M.

        Handle 0x0002, DMI type 2, 15 bytes
        Base Board Information
            Manufacturer: Supermicro
            Product Name: X8SIE
            Version: 0123456789
            Serial Number: VM11S61561
            Asset Tag: To Be Filled By O.E.M.
            Features:
                Board is a hosting board
                Board is replaceable
            Location In Chassis: To Be Filled By O.E.M.
            Chassis Handle: 0x0003
            Type: Motherboard
            Contained Object Handles: 0

        Handle 0x0003, DMI type 3, 21 bytes
        Chassis Information
            Manufacturer: Supermicro
            Type: Sealed-case PC
            Lock: Not Present
            Version: 0123456789
            Serial Number: 0123456789
            Asset Tag: To Be Filled By O.E.M.
            Boot-up State: Safe
            Power Supply State: Safe
            Thermal State: Safe
            Security Status: None
            OEM Information: 0x00000000
            Height: Unspecified
            Number Of Power Cords: 1
            Contained Elements: 0

    Read the article

  • Windows Server (SBS) 2008 - Telephony service won't start (missing permissions)

    - by Uri
    I am running a SBS 2008 server, set up as the domain controller for the network. After a reboot, the Telephony service (and all services that depend on it) refuses to start under the Network Service account. The error given is:

        Error 1297: A privilege that the service requires to function properly does not exist in the service account configuration. You may use the Services Microsoft Management Console (MMC) snap-in (services.msc) and the Local Security Settings MMC snap-in (secpol.msc) to view the service configuration and the account configuration.

    This has caused all the network services to become inaccessible, e.g. Terminal Services, VPN (RRAS), and SQL Server instances. The SSH daemon I have running on the box will accept connections only from localhost and won't respond on the network. After searching around, the only advice I could find was to grant the Network Service account these permissions:

    - Adjust memory quotas for a process
    - Replace a process level token

    I set those permissions on both the Default Domain Policy and the Default Domain Controller Policy, but it seemingly had no effect. Most of the services will start if I change them to run under the Local System account, but that didn't make them accessible on the network. I even tried removing the Routing and Remote Access Services feature, rebooting and reinstalling it, but the issue remains. Any ideas?

    Read the article

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located there, and the other half take you to a Win2003 server also running IIS6. So, in order to achieve this, must each server have the proper certificate installed - ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made to our ISA firewall?

    The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com, but the domain name on IIS6_2 is services.domainname.com. I believe that this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123, as it's a good name and it's fast to get. Will I need to purchase two certificates (one for www.domainname.com that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase three, the extra one being used on our firewall server?

    Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? Could I purchase one cert with one SAN covering both my www. and services. names? Thanks a lot for your help! Please let me know if I can provide any further information.

    Read the article

  • JSON and jQuery.ajax

    - by Andreas
    Hello, I'm trying to use the jQuery UI autocomplete to communicate with a web service whose response format is JSON, but I am unable to do so. My web service is not even executed; the path should be correct, since the error message does not complain about it. What strikes me is the headers: the response is SOAP but the request is JSON. Is it supposed to be like this?

        Response Headers
        Content-Type    application/soap+xml; charset=utf-8

        Request Headers
        Accept          application/json, text/javascript, */*
        Content-Type    application/json; charset=utf-8

    The error message I get is as follows (sorry for the huge message, but it might be of importance):

        soap:Receiver - System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Xml.XmlException: Data at the root level is invalid. Line 1, position 1.
        at System.Xml.XmlTextReaderImpl.Throw(Exception e)
        at System.Xml.XmlTextReaderImpl.Throw(String res, String arg)
        at System.Xml.XmlTextReaderImpl.ParseRootLevelWhitespace()
        at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
        at System.Xml.XmlTextReaderImpl.Read()
        at System.Xml.XmlTextReader.Read()
        at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.Read()
        at System.Xml.XmlReader.MoveToContent()
        at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.MoveToContent()
        at System.Web.Services.Protocols.SoapServerProtocolHelper.GetRequestElement()
        at System.Web.Services.Protocols.Soap12ServerProtocolHelper.RouteRequest()
        at System.Web.Services.Protocols.SoapServerProtocol.RouteRequest(SoapServerMessage message)
        at System.Web.Services.Protocols.SoapServerProtocol.Initialize()
        at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing)
        --- End of inner exception stack trace ---

    This is my code:

        $('selector').autocomplete({
            source: function(request, response) {
                $.ajax({
                    url: "../WebService/Member.asmx",
                    dataType: "json",
                    contentType: "application/json; charset=utf-8",
                    type: "POST",
                    data: JSON.stringify({prefixText: request.term}),
                    success: function(data) {
                        alert('success');
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown) {
                        alert('error');
                    }
                })
            },
            minLength: 1,
            select: function(event, ui) {
            }
        });

    And my web service looks like this:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.ComponentModel.ToolboxItem(false)]
        [ScriptService]
        public class Member : WebService
        {
            [WebMethod(EnableSession = true)]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public string[] GetMembers(string prefixText)
            {
                // code code code
            }
        }

    What am I doing wrong? Thanks in advance :)

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello. I have a few WCF services that are currently being self-hosted in a very basic NT Service. I want to expand my application to add provisioning of WCF services and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to run on a regular basis - pinging the database and getting information from external devices - so that when a request comes in, the data is readily available.

    I'm thinking about trying out Windows Activation Service (WAS), because I really like the provisioning and isolation that come with a managed services infrastructure. If I didn't use WAS, I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on it. The article I read (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services."

    It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT Service (starting and stopping independently of having a method called upon it). Any other ideas would be great too. I was planning on writing this infrastructure myself: hosting WCF services in a custom ServiceHost, putting their execution in a separate AppDomain, and allowing for provisioning of these services after initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to. Thanks, Joshua
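    For comparison, here is a minimal C# sketch of the self-hosted singleton arrangement the poster describes, where the service instance exists (and can do background work) before any client calls arrive. The contract, address and polling logic are hypothetical placeholders.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IDeviceStatus
        {
            [OperationContract]
            string LatestReading();
        }

        // Singleton: one instance for the lifetime of the host, so it can poll
        // devices in the background regardless of incoming calls.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class DeviceStatusService : IDeviceStatus
        {
            private volatile string latest = "no data yet";

            // Stand-in for the real database ping / device I/O.
            public void PollOnce() { latest = DateTime.UtcNow.ToString("o"); }

            public string LatestReading() { return latest; }
        }

        class Program
        {
            static void Main()
            {
                var instance = new DeviceStatusService();
                // Passing the instance (not the type) makes WCF use this singleton.
                using (var host = new ServiceHost(instance, new Uri("http://localhost:8080/devices")))
                {
                    host.AddServiceEndpoint(typeof(IDeviceStatus), new BasicHttpBinding(), "");
                    host.Open();
                    instance.PollOnce(); // background work runs before any client calls
                    Console.ReadLine();  // in the real NT Service this is OnStart/OnStop
                }
            }
        }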

    Read the article

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the Service Contract and Host (.svc) are in separate assemblies). I fired up a web site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my Web Site project I added the reference, and I am able to call the services normally. Now I need to call some of the services client-side. I am not sure whether I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone provide any thoughts or experience with configuring this?

    Some of the changes I made: I added the following to the Operation Contract:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "SetFoo")]
        void SetFoo(string Id);

    Then this above the implementation of the interface:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    Then in the service web.config I have this:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
          <baseAddressPrefixFilters>
            <add prefix="http://services:1000/wcfservices.svc/"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    Then on the client side I attempted this:

        <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
          <CompositeScript>
            <Scripts>
              <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" />
            </Scripts>
          </CompositeScript>
        </asp:ScriptManagerProxy>

    I am attempting to call the service in JavaScript like this:

        wcfservices.SetFoo(string Id);

    Nothing is working. If it is a better idea or solution to make the services JSON-enabled, or to use jQuery, etc., I am willing to make any changes. Thanks for any suggestions/tips provided...
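    A common arrangement for script-callable WCF is a webHttpBinding endpoint with JSON message formats. Below is a minimal, hypothetical C# sketch of such a contract and a self-hosted endpoint for it; the names and address are illustrative, not the poster's actual configuration, and IIS hosting would use the web.config equivalent of the same behavior.

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IFooService
        {
            // POST /SetFoo with a JSON body such as {"Id":"42"}
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "SetFoo",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json)]
            void SetFoo(string Id);
        }

        public class FooService : IFooService
        {
            public void SetFoo(string Id) { Console.WriteLine("SetFoo({0})", Id); }
        }

        class Demo
        {
            static void Main()
            {
                using (var host = new ServiceHost(typeof(FooService),
                                                  new Uri("http://localhost:1000/foo")))
                {
                    var ep = host.AddServiceEndpoint(typeof(IFooService),
                                                     new WebHttpBinding(), "");
                    ep.Behaviors.Add(new WebHttpBehavior()); // dispatch by UriTemplate, JSON
                    host.Open();
                    Console.ReadLine();
                }
            }
        }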

    Read the article

  • SQL 2008 Querying Soap XML

    - by Vince
    I have been trying to process this SOAP XML response using SQL, but all I get is NULL or nothing at all. I have tried different ways and pasted them all below.

        Declare @xmlMsg xml;
        Set @xmlMsg = '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <SendWarrantyEmailResponse xmlns="http://Web.Services.Warranty/">
              <SendWarrantyEmailResult xmlns="http://Web.Services.SendWarrantyResult">
                <WarrantyNumber>120405000000015</WarrantyNumber>
                <Result>Cannot Send Email to anonymous account!</Result>
                <HasError>true</HasError>
                <MsgUtcTime>2012-06-07T01:11:36.8665126Z</MsgUtcTime>
              </SendWarrantyEmailResult>
            </SendWarrantyEmailResponse>
          </soap:Body>
        </soap:Envelope>';

        declare @table table (data xml);
        insert into @table values (@xmlMsg);
        select data from @table;

        WITH xmlnamespaces ('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                            'http://Web.Services.Warranty' as SendWarrantyEmailResponse,
                            'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult)
        SELECT Data.value('(/soap:Envelope[1]/soap:Body[1]/SendWarrantyEmailResponse[1]/SendWarrantyEmailResult[1]/WarrantyNumber[1])[1]','VARCHAR(500)') AS WarrantyNumber
        FROM @Table;

        ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                            'http://Web.Services.Warranty' as SendWarrantyEmailResponse,
                            'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult)
        --select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/WarrantyNumber)[0]', 'nvarchar(max)')
        --select T.N.value('.', 'nvarchar(max)') from @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') as T(N)
        select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/HasError)[1]','bit') as test

        ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                            'http://Web.Services.Warranty' as [SendWarrantyEmailResponse],
                            'http://Web.Services.SendWarrantyResult' as [SendWarrantyEmailResult])
        SELECT SendWarrantyEmailResult.value('WarrantyNumber[1]','varchar(max)') AS WarrantyNumber,
               SendWarrantyEmailResult.value('Result[1]','varchar(max)') AS Result,
               SendWarrantyEmailResult.value('HasError[1]','bit') AS HasError,
               SendWarrantyEmailResult.value('MsgUtcTime[1]','datetime') AS MsgUtcTime
        FROM @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') SendWarrantyEmailResults(SendWarrantyEmailResult)

    Read the article

  • Constructor on type: "Namespace.type" not found.

    - by Nick
    Hello, I am using Castle Windsor as an IoC container, and I am trying to resolve a service type in the constructor of an HttpHandler. I keep receiving this error: "Constructor on type: 'Namespace.type' not found." My configuration has the following entries for the service type IDocumentDirectory:

        <component id="restricted.content.directory"
                   service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
                   type="org.healthwise.foundations.services.content.RestrictedLocalizationDocumentDirectory, org.healthwise.foundations.services">
          <parameters>
            <contentDirectory>${content.directory}</contentDirectory>
            <localizations>
              <array>
                <item>en-us</item>
                <item>es-us</item>
              </array>
            </localizations>
          </parameters>
        </component>

        <component id="content.directory"
                   service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
                   type="org.healthwise.foundations.services.web.client.WebServiceDocumentDirectory, org.healthwise.foundations.services.web.client">
          <parameters>
            <webServiceURL>#{contentDirectoryWebsiteUrl}</webServiceURL>
          </parameters>
        </component>

    In my new handler the constructor looks like this:

        public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
        {
            _contentDirectory = contentDirectory;
        }

    I have never received this error using Castle Windsor. Can someone explain? Thanks!
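    One detail worth noting when reading this question: ASP.NET instantiates IHttpHandler types itself and expects a parameterless constructor, so a handler that only has a dependency-taking constructor cannot be created by the runtime directly. A common workaround is to resolve the dependency from the container inside a parameterless constructor. The C# sketch below is a hypothetical illustration of that pattern, not the poster's code; ContainerAccessor is an assumed static holder for the WindsorContainer, and IDocumentDirectory is stubbed here for completeness.

        using System.Web;
        using Castle.Windsor;

        public interface IDocumentDirectory { } // the poster's service interface (stub)

        // Hypothetical static holder initialized at application start-up.
        public static class ContainerAccessor
        {
            public static IWindsorContainer Container { get; set; }
        }

        public class HeartBeatHttpHandler : IHttpHandler
        {
            private readonly IDocumentDirectory _contentDirectory;

            // ASP.NET can call this; the dependency is pulled from the container.
            public HeartBeatHttpHandler()
                : this(ContainerAccessor.Container.Resolve<IDocumentDirectory>())
            {
            }

            // Kept for tests or manual construction.
            public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
            {
                _contentDirectory = contentDirectory;
            }

            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                // ... use _contentDirectory ...
            }
        }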

    Read the article

  • Active Directory Services: PrincipalContext -- What is the DN of a "container" object?

    - by Ranger Pretzel
    I'm currently trying to authenticate via Active Directory Services using the PrincipalContext class. I would like to have my application authenticate to the domain using sealed and SSL contexts. In order to do this, I have to use the following constructor of PrincipalContext (link to MSDN page):

        public PrincipalContext(
            ContextType contextType,
            string name,
            string container,
            ContextOptions options
        )

    Specifically, I'm using the constructor as so:

        PrincipalContext domainContext = new PrincipalContext(
            ContextType.Domain,
            domain,
            container,
            ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

    MSDN says about "container":

        The container on the store to use as the root of the context. All queries are performed under this root, and all inserts are performed into this container. For Domain and ApplicationDirectory context types, this parameter is the distinguished name (DN) of a container object.

    What is the DN of a container object? How do I find out what my container object is? Can I query the Active Directory (or LDAP) server for this?
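    For readers unfamiliar with distinguished names: a container DN is the LDAP path of the node under which queries run, built from CN/OU and DC components. Below is a minimal, hypothetical C# example, assuming a domain example.com whose accounts live in the default Users container; the rootDSE query shows one way to discover the domain's base DN.

        using System.DirectoryServices.AccountManagement;

        class DnExample
        {
            static void Main()
            {
                // "CN=Users,DC=example,DC=com" is the DN of the built-in Users
                // container in a hypothetical example.com domain; an OU would
                // look like "OU=Staff,DC=example,DC=com".
                var domainContext = new PrincipalContext(
                    ContextType.Domain,
                    "example.com",
                    "CN=Users,DC=example,DC=com",
                    ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

                // The defaultNamingContext attribute of rootDSE gives the
                // domain's base DN, which is one way to find what to build on.
                using (var rootDse = new System.DirectoryServices.DirectoryEntry("LDAP://rootDSE"))
                {
                    var baseDn = rootDse.Properties["defaultNamingContext"].Value;
                    System.Console.WriteLine(baseDn); // e.g. DC=example,DC=com
                }
            }
        }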

    Read the article

  • Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

    - by Matt
    I have been tasked with demoing how Integration Services handles an error during a data flow, to show that no data makes it into the destination. This is an existing package, and I want to limit changes to the package as much as possible (since this is most likely a one-time deal). The scenario to be simulated is a "systemic" failure: the source file disappears midstream, the file server loses power, etc. I know I can make this happen by setting the Error Output of the source to Failure and introducing bad data, but I would like to do something lighter than that. I suppose I could add a Script Transform, look for a certain value, and throw an error, but I was hoping someone has come up with something easier / more elegant. Thanks, Matt
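    For reference, here is roughly what the Script Transform approach mentioned above looks like. This is a hedged C# sketch of the per-row method inside an SSIS script component's designer-generated class; Input0Buffer and the SomeValue column are the package-specific generated names assumed here, and whether the destination actually rolls back depends on the package's transaction settings.

        // Inside an SSIS Script Component (transformation), ProcessInputRow is
        // called once per row. Throwing fails the component, which fails the
        // data flow task.
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // "POISON" is an arbitrary trigger value planted in the test data.
            if (!Row.SomeValue_IsNull && Row.SomeValue == "POISON")
            {
                throw new System.Exception("Simulated systemic failure for demo purposes.");
            }
        }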

    Read the article

  • Where to find great proxy servers for testing GeoIP services?

    - by Andreas
    We would like to test a GeoIP service. To do that, we need to visit the site with an IP from another country. There are a lot of free proxy lists, like http://nntime.com/proxy-country/. The problem with them is that only the CoDeeN proxies are working, but with CoDeeN you can't select your country of origin (the same as with Tor); you get redirected to a random proxy in the network. Where can we find good proxy servers for testing GeoIP services? Free proxy servers would be great, but if they cost something small, that doesn't matter.

    Read the article

  • WCF Rest services for use with the repository pattern?

    - by mark smith
    Hi there, I am considering moving my service layer and my data layer (repository pattern) to a WCF REST service. So basically I would have my software installed locally (a WPF client) which would call the service layer via a REST service... The service layer would then call my data layer, either via a WCF REST service as well, or directly via the DLL assembly. I was hoping to understand what the performance would be like. Currently I have my data layer and service layer installed locally via DLL assemblies on the PC. Also, I presume WCF REST services won't support method overloading - hence the same name but with a different signature? I would really appreciate any feedback anyone can give. Thanks
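    On the overloading point: a WCF contract can expose two CLR overloads as long as each operation gets a distinct wire name, and for REST a distinct UriTemplate. A minimal, hypothetical C# sketch (all names invented for illustration):

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class Customer
        {
            [DataMember] public string Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            // Two C# overloads of GetCustomer: each must get a distinct operation
            // Name and a distinct UriTemplate, because WCF identifies operations
            // by name on the wire, not by CLR signature.
            [OperationContract(Name = "GetCustomerById")]
            [WebGet(UriTemplate = "customers/{id}")]
            Customer GetCustomer(string id);

            [OperationContract(Name = "GetCustomerByNameAndCity")]
            [WebGet(UriTemplate = "customers?name={name}&city={city}")]
            Customer GetCustomer(string name, string city);
        }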

    Read the article

  • SQL Server Reporting Services 2008: How to set the credentials property properly?

    - by wgpubs
    No matter how I configure the Credentials property, I get a 401 exception when I try to render the report. Here is my (latest) code:

        var rs = new ReportExecutionService();
        rs.Url = "https://myserver/reportserver/reportexecution2005.asmx";

        var myCache = new System.Net.CredentialCache();
        myCache.Add(new Uri(rs.Url), "kerberos",
            new System.Net.NetworkCredential("username", "password", "Domain"));

        rs.Credentials = myCache;

    The URL and credentials are all correct, but I am still getting a 401 when I call rs.Render(...). The Reporting Services install is sitting on a Windows Server 2008 box and requires integrated authentication. Thanks
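    Two variations are commonly tried in this situation - shown here as a hedged C# sketch, not a guaranteed fix: letting the client negotiate the scheme rather than pinning it to "kerberos", or passing the process identity outright. ReportExecutionService is the poster's generated SOAP proxy.

        // 1. Let the client negotiate Kerberos/NTLM instead of requiring Kerberos.
        var rs = new ReportExecutionService();
        rs.Url = "https://myserver/reportserver/reportexecution2005.asmx";
        var cache = new System.Net.CredentialCache();
        cache.Add(new System.Uri(rs.Url), "Negotiate",
            new System.Net.NetworkCredential("username", "password", "Domain"));
        rs.Credentials = cache;

        // 2. Or run as the current Windows identity instead of explicit credentials.
        // rs.Credentials = System.Net.CredentialCache.DefaultCredentials;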

    Read the article

  • How can I upload a file to a Sharepoint Document Library using Silverlight and client web-services?

    - by pclem12
    Most of the solutions I've come across for SharePoint document library uploads use the HTTP "PUT" method, but I'm having trouble finding a way to do this in Silverlight because it has restrictions on the HTTP methods. I visited http://msdn.microsoft.com/en-us/library/dd920295(VS.95).aspx to see how to allow PUT in my code, but I can't find how that helps you issue an HTTP "PUT". I am using client web services, so that limits some of the SharePoint functions available. That leaves me with these questions: Can I do an HTTP PUT in Silverlight? If I can't, or if there is another, better way to upload a file, what is it? Thanks
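    One route that avoids PUT entirely is SharePoint's Copy web service, whose CopyIntoItems operation uploads a document as a byte array over plain SOAP. Below is a hedged C# sketch against a synchronous Copy.asmx proxy; in Silverlight the generated proxy would expose the async pattern instead, and all URLs and file names here are hypothetical.

        // Assumes a web reference to http://server/_vti_bin/Copy.asmx was added,
        // generating the Copy proxy with FieldInformation/CopyResult types.
        var copyService = new Copy();
        copyService.Url = "http://server/_vti_bin/Copy.asmx";
        copyService.UseDefaultCredentials = true;

        byte[] fileBytes = System.IO.File.ReadAllBytes("report.docx");
        string[] destination = { "http://server/Shared Documents/report.docx" };

        CopyResult[] results;
        uint result = copyService.CopyIntoItems(
            "http://null",            // SourceUrl: a placeholder is commonly used
            destination,
            new FieldInformation[0],  // no extra metadata fields in this sketch
            fileBytes,
            out results);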

    Read the article

  • How to create Web services and link them to a Smart Device Application?

    - by Crimsonland
    I am a fresh graduate hired to develop smart device applications. We use the Datalogic Memoir with Windows CE 5.0. Even though I have novice skills in programming in VB.NET, I just finished my project and applications for the Datalogic Memoir, wherein the data is saved to a text file or SQL Server Compact database on the handheld device, and an ActiveSync connection is used to pass data between the handheld and the desktop PC and vice versa. I have now shifted to C# and started learning it, and I have successfully transposed my smart device project from VB to C#. Now I want to start linking the device to a server using web services, but I don't have any idea how to begin.
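    At a high level, the usual path on Windows CE / Compact Framework is: expose an ASMX or WCF service on the server, add a Web Reference to it in the smart device project, and call the generated proxy. A minimal, hypothetical C# sketch - the service name, URL and method are placeholders, not a real API:

        // After "Add Web Reference" against http://server/InventoryService.asmx,
        // Visual Studio generates a proxy class (here assumed to be InventoryService).
        var service = new InventoryService();

        // Point the proxy at a server reachable from the device
        // (over ActiveSync, Wi-Fi, or cradle networking).
        service.Url = "http://192.168.0.10/InventoryService.asmx";

        // A hypothetical method that uploads one scanned record.
        service.SaveScan("4006381333931", System.DateTime.Now);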

    Read the article

  • Quality Design for Asynchronous WCF Services Calls in a Middle-Tier and Returning Data to UI Tier

    - by Perplexed
    I have a WPF application with a group of asynchronous WCF service calls all mashed into the code-behind, complete with event handlers and everything, that I have to refactor to productionize and maintain. I want to separate concerns here, for maintainability and all the other good reasons, but I'm not sure exactly how to achieve this. Does anybody have any good ideas on how to do this, or at least some links to point me in the right direction?

    My thinking: create an "infrastructure" layer and reference the services there. Move the asynchronous event handlers into this layer. When an update is called, I will bubble up my own event with my own derivation of the EventArgs class that contains the data the UI will need. The UI will be fairly tightly hooked to the infrastructure layer, as it will consume events I fire off upon completion of an asynchronous data call.
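    A minimal, hypothetical C# sketch of the infrastructure-layer idea described above: the service proxy's completion handler is hidden in the layer, which re-raises a custom EventArgs carrying only what the UI needs. CustomerServiceClient is a stand-in for whatever the generated WCF client would be; it is stubbed here so the sketch is self-contained.

        using System;

        // Custom EventArgs carrying only the data the UI layer needs.
        public class CustomersLoadedEventArgs : EventArgs
        {
            public string[] CustomerNames { get; private set; }
            public CustomersLoadedEventArgs(string[] names) { CustomerNames = names; }
        }

        public class GetCustomersCompletedEventArgs : EventArgs
        {
            public string[] Result { get; private set; }
            public GetCustomersCompletedEventArgs(string[] result) { Result = result; }
        }

        // Stand-in for the generated WCF proxy (the real one comes from
        // "Add Service Reference" and exposes the same Completed/Async pattern).
        public class CustomerServiceClient
        {
            public event EventHandler<GetCustomersCompletedEventArgs> GetCustomersCompleted;

            public void GetCustomersAsync()
            {
                // The real proxy calls the service; this stand-in completes at once.
                var handler = GetCustomersCompleted;
                if (handler != null)
                    handler(this, new GetCustomersCompletedEventArgs(new[] { "Ada", "Grace" }));
            }
        }

        // Infrastructure layer: owns the async client, exposes a domain event.
        public class CustomerRepository
        {
            private readonly CustomerServiceClient client = new CustomerServiceClient();

            public event EventHandler<CustomersLoadedEventArgs> CustomersLoaded;

            public CustomerRepository()
            {
                client.GetCustomersCompleted += (s, e) =>
                {
                    var handler = CustomersLoaded;
                    if (handler != null)
                        handler(this, new CustomersLoadedEventArgs(e.Result));
                };
            }

            public void BeginLoadCustomers() { client.GetCustomersAsync(); }
        }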

    Read the article

  • Mac: How to see list of running network services?

    - by Jordan
    I am writing an application that needs to connect with a running network service on a Mac. The problem is, I have no idea what the service is called or even what port it uses. Is there a way to browse all running network services on my Mac?

    More info: I am connecting to a MIDI network session (found under 'Audio MIDI Setup', present on all OS X installs). Am I correct in thinking this is a network service? I am planning to use NSNetServiceBrowser to locate all local computers running this service (is this the best way to go about it?). Any help is much appreciated - thanks!

    Read the article

  • iPhone / Android: what protocol stacks do apps use for connecting to centralised services?

    - by Richard
    Hi all, apologies for the ignorant question; I have no experience with app development on any mobile platform. Basically what I want to know is: what communication protocols do apps typically use for accessing/querying centralised services? E.g. if I port a web app/service to iPhone/Android, how would I typically access/query this web service in my app? Is it over HTTP, or are there other protocols? Also, presumably the GUI of an app is constructed with Apple/Android GUI libraries (in Java? Cocoa?). Can an app GUI be defined with HTML/JavaScript like a webpage? Sorry again for the pure noob questions. Thanks

    Read the article
