Search Results

Search found 52602 results on 2105 pages for 'amazon web services'.


  • How to use HTML and JavaScript in a Content Editor web part in SharePoint 2010

    - by ybbest
    Here are the steps you need to take to use HTML and JavaScript in a Content Editor web part. 1. Edit a site page and add a Content Editor web part to the page. 2. After the Content Editor web part is added, it will display on the page as shown below. 3. Next, upload your HTML and JavaScript content as a text file to a document library inside your SharePoint site. Here is the content of the document: <script type="text/javascript"> alert("Hello World"); </script> 4. Edit the Content Editor web part and reference the file you just uploaded. 5. Save the page and you will see the "Hello World" prompt. References: http://stackoverflow.com/questions/5020573/sharepoint-2010-content-editor-web-part-duplicating-entries http://sharepointadam.com/2010/08/31/insert-javascript-into-a-content-editor-web-part-cewp/

    Read the article

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over there as well. So that's annoying. I am faced with several options: Replicate from MySQL to SQL Azure/SQL Server - this seems like it would be lovely; is it possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set, it's just data in tables. Get MySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps." Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower. Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry. Has anyone gotten MySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates, might meet our but-we-need-to-join-some-tables needs, and can easily be run on Amazon and Azure)?

    Read the article

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize and checked that it's open. Iptables is definitely not running. However I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000): telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (from within EC2: connects fine) nmap localhost (output includes the line: 7000/tcp open afs3-fileserver) telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (this time from my local machine: I get "connection refused: Unable to connect to remote host") The strange thing is this: if I start Nginx on port 7000 then it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account.) How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
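
    A common culprit in this situation is the WebSocket server binding only to 127.0.0.1 instead of to all interfaces, which would explain why connections succeed from inside the instance but are refused from outside, while Nginx (which by default listens on all interfaces) works on the same port. A minimal check with standard Linux tools, using port 7000 from the question:

        # Show the local address the listener on port 7000 is bound to.
        # 127.0.0.1:7000 means loopback only; 0.0.0.0:7000 means all interfaces.
        sudo netstat -tlnp | grep ':7000'
        # or, with iproute2:
        sudo ss -tlnp | grep ':7000'

    If the listener turns out to be bound to loopback, reconfiguring the WebSocket server to listen on 0.0.0.0 (or the instance's private address) is the fix; the security group rule opened with ec2-authorize is then enough on the AWS side.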

    Read the article

  • Implementing web components with the Ellipse framework, by the Ellipse team

    The goal of this new tutorial is to show you how to create reusable web components with the Ellipse framework. Two types of web components can be considered: a web component defined in a plain Java class, or a web component based on an XML template (TemplatedComponent) that stores the layout used for HTML rendering. Naturally, we will use the Ellipse plugin to simplify building these elements. The tutorial is available at the following address: http://ellipse.developpez.com/tutori...s-web-ellipse/ If you have comments or questions, feel free to reply to this discussion.

    Read the article

  • Configure firewall (Shorewall/UFW) to allow traffic for services on an Ubuntu Server

    - by Niklas
    I have an Ubuntu Server 11.04 x64 machine which I want to secure. The server will be open to the Internet and I want to be able to SSH/SFTP into the machine; the SSH server runs on a custom port. I also want a web server accessible from the Internet. These tasks seem not too hard to perform, but I also want SAMBA shares to be accessible from within the local network, and this seems to be a bit trickier. If possible I also want to be able to "stealth" the ports necessary to protect the server further, but still allow the SAMBA shares to be found automatically within the local network. I've never configured firewalls before except on a router, and I always bump into a bunch of problems when doing it all by myself, so I was hoping for some tips or preferably a guide on how to do this. Thank you! Update: On second thought, I could just as easily go with UFW if the same settings are achievable ("stealth" ports).
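
    For the UFW route, a minimal sketch of the kind of rule set being described looks like the following; the SSH port (2222) and the LAN subnet (192.168.1.0/24) are placeholders for the real values, and the Samba application profile assumes the samba package has registered its UFW profile. UFW's default "deny" policy drops packets rather than rejecting them, which gives the "stealth" behaviour for every port that is not explicitly opened.

        sudo ufw default deny incoming
        sudo ufw default allow outgoing
        sudo ufw allow 2222/tcp                               # custom SSH/SFTP port (placeholder)
        sudo ufw allow 80/tcp                                 # public web server
        sudo ufw allow from 192.168.1.0/24 to any app Samba   # SMB shares, LAN only (placeholder subnet)
        sudo ufw enable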

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error: Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server. I converted the crt file to PEM using these commands from this tutorial: openssl x509 -in input.crt -out input.der -outform DER openssl x509 -in input.der -inform DER -out output.pem -outform PEM During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? UPDATE If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the top-most certificate) before adding it to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, then the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then click on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
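
    On the "how do I create the certificate chain" part: the chain is simply the intermediate certificate(s), optionally followed by the root, concatenated into one PEM file, with the server certificate itself left out, and that file's contents are what go into the ELB's certificate chain box. A quick way to build and sanity-check it with openssl (file names here are placeholders):

        # Intermediate(s) first, then the root; do NOT include the server cert itself.
        cat intermediate.pem root.pem > chain.pem

        # Verify the server certificate against the chain before uploading it.
        openssl verify -CAfile chain.pem server.pem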

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 supports two fundamental types of client identities – I like to call them "enterprise identities" (WS-*) and "web identities" (Google, LiveID, OpenID in general…). I also see two different "mind sets" when it comes to application design using the above identity types: Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper). Web identities – the fact that a user can authenticate with Google et al does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc). Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user="?" in ASP.NET terms). That's a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition was true. Been there, done that. Guilty ;) The fundamental difference between these "old style" apps and federation is that authentication is not done by the application anymore. It is done by a third-party service, and in the case of web identity providers, by services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS and someone with a Google account authenticates, indeed IsAuthenticated is true – because that's what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: we have to deal with authentication and authorization separately. Job done ;) For many application types I see this general approach: The application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities, but you could easily have a dual approach here). The application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc. The application also maintains a database of its "own" users, since typically you want to store additional information about each user. In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore, ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347 You can now check on incoming calls if the user is already registered and, if yes, swap the ACS claims with claims coming from your user database. One claim might be a role like "Registered User", which can then easily be used to do authorization checks in the application.
    The WIF claims authentication manager is a perfect place to do the claims transformation. If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample. More on NameIdentifier: identity providers "guarantee" that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design, to protect the privacy of users, because identical name identifiers could be used to create "profiles" of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)) Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your "connection" to your existing users. Oh and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.

    Read the article

  • EJB Lifecycle and Relation to WARs

    - by Adam Tannon
    I've been reading up on EJBs (3.x) and believe I understand the basics. This question is a "call for confirmation" that I have interpreted the Java EE docs correctly and that I understand these fundamental concepts: An EJB is to an App Container as a Web App (WAR) is to a Web Container. Just like you deploy a WAR to a Web Container and that container manages your WAR's life cycle, you deploy an EJB to an App Container, and the container manages your EJB's life cycle. When the App Container fires up and deploys an EJB, the EJB is given a unique "identifier" and URL that can be used by JNDI to look it up from another tier (like the web tier). So, when your web app wants to invoke one of your EJB's methods, it looks the EJB up using some kind of service locator (JNDI) and invokes the method that way. Am I on track or way off base here? Please correct me and clarify for me if any of these are incorrect. Thanks in advance!

    Read the article

  • Calling Web Service with Complex Parameters in ADF Mobile

    - by Shay Shmeltzer
    Many of the SOAP-based web services out there have parameters of specific object types - so not just simple String/int inputs. The ADF Web service data control makes it quite simple to interact with them, and this applies also in the case of ADF Mobile. Since there were several threads on OTN asking about this, I thought I'd do a quick demo to refresh people's memory about how you pass these "complex" parameters to your Web service methods. By the way, this video is also relevant if you are not doing mobile development; you'll basically use the exact same process for building "regular web" ADF applications that access these types of Web services. One more thing you might want to do after you create the page is look at the binding tab to see the method call in there, and notice the parameters for it in the structure property. Go and look at their NDValue property to get the complete picture.

    Read the article

  • Servlet: Usage of Constants.java class vs context param

    - by Pongsakorn Semsuwan
    I'm just wondering whether to keep some of my variables in a Constants class or in web.xml. Say I want to keep a variable for the Facebook Graph API prefix, or api_key and client_id. From my understanding, the difference between Constants.java and web.xml is that web.xml is easier to rewrite at build time using Ant, so you can replace your variables in web.xml according to which environment you are building your app for (client_id varies between the development environment and the production environment, for example). If I understand it right, then the Facebook Graph API prefix should be kept in Constants.java (because it is always "https://graph.facebook.com/"), and api_key and client_id should be kept in web.xml? What's the proper way to use them?

    Read the article

  • What web hosts support multi-domain SSL?

    - by Bryan Hadaway
    For consideration - please do not close or refer this question to: How to find web hosting that meets my requirements? The above link does not refer to SSL certificates in any manner. This question has a very specific objective of listing known web hosts that support this new SSL technology. If I'm not mistaken, multi-domain (not wildcard) SSL is a relatively new technology that is not hugely supported or well-known/advertised yet. I'm having a difficult time discovering which web hosts support the technology (again, because it's not popular enough yet to advertise on feature lists). Here is what I've discovered so far: Web hosts that DO NOT support multi-domain SSL: BlueHost/HostMonster, DreamHost. Web hosts that DO support multi-domain SSL: FireHost, HostGator. Please note that "support" doesn't necessarily mean they offer the SSL certs themselves; you may need to purchase one separately.

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense that they cannot be replicated at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, plus most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - but should this be done manually? I'm not sure how this could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
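
    On the deployment question, one common pattern is to stop hard-coding the server list and instead build it at deploy time by querying EC2 for instances carrying a particular tag. A rough sketch using the classic ec2-api-tools; the Role=web tag is an assumption, and the cut field index may need adjusting to match your tool version's output:

        # List public DNS names of running web-tier instances so a Capifony/Capistrano
        # server list can be generated from live data at deploy time.
        ec2-describe-instances \
            --filter "tag:Role=web" \
            --filter "instance-state-name=running" \
          | grep '^INSTANCE' \
          | cut -f4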

    Read the article

  • Google I/O 2012 - Fast UIs for the Cross-Device Web

    Google I/O 2012 - Fast UIs for the Cross-Device Web, presented by Boris Smus. One of the great features of the modern web is that sites work on any device with a browser. This session will focus on creating UIs for the cross-device web. We will cover building web sites that support multiple device form factors (responsive and non-responsive approaches), discuss single-page sites and some of the layout features in modern mobile browsers, and do a deep dive into multi-touch input on the web. Finally, we'll show some of the awesome new mobile debugging tools in Chrome and Chrome for Android. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 49:31.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Search Engine Optimization (SEO)

    To continue our series, let's look at SEO and Silverlight. The vast majority of web traffic is driven by search. Search engines are the first stop for many users on the public internet, and increasingly in corporate environments as well. Search is also the key technology that drives most ad revenue. So needless to say, SEO is important. But how does SEO work in a Silverlight application where most of the interesting content is dynamically generated? I will...

    Read the article

  • Chrome Mobile: The Mobile Web Developers Toolkit (Part 2)

    Building for the mobile web requires a different mindset than desktop web development, and a different set of tools. The tools we're used to using often aren't available or would take up too much screen real estate. And going back to the dark ages of tweak/save/deploy/test/repeat isn't exactly optimal, so what can we do? Thankfully there are a number of great options - from remote debugging to emulation, mobile browsers are offering more and more tools to make our lives easier. We'll take a look at a couple of tools that you can use today to make cross-platform mobile web development easier, and then peer into the crystal ball to see what tools the future may bring. Join us for Part 2 as we take a look at some of the many tools that make testing the mobile web easier. From: GoogleDevelopers. Time: 01:00:00.

    Read the article

  • How Web Optimization Services Work to Increase Your Online Reputation

    SEO stands for search engine optimization, and it is the key to success for a business. No site has much value if it isn't properly promoted. Whenever a web surfer looks for a particular product, service, or piece of information, he uses the easiest method available, a search engine, and most people only look at the first five or six results for what they need. Nobody has time to dig through a hundred pages of search engine results, and there is no need to when what he is looking for appears on the first pages.

    Read the article

  • How to fix “Error: This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.”

    - by ybbest
    Problem: When I try to deploy my custom WSP solution to a specific web application, I get the error below: This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application. Analysis: The error message itself explains why you cannot deploy the solution to a web application. However, if you do not want to deploy the solution to all the web applications and only want to deploy it to a specific web application, you need to change the solution's Assembly Deployment Target setting from GlobalAssemblyCache to WebApplication. Solution: After you change the Assembly Deployment Target and run the script again, the solution will deploy successfully. References: http://blogs.msdn.com/b/jjameson/archive/2007/06/17/issues-deploying-sharepoint-solution-packages.aspx

    Read the article

  • Radio Button Web Control Basics in ASP.NET 3.5

    The use of the radio button is widespread in common and basic web applications. Web forms use the radio button to allow the user to select only one option from a list of choices. A common implementation is in surveys. This tutorial will focus on developing web forms that use radio button web controls in ASP.NET 3.5 in Visual Web Developer Express. This is a basic tutorial which should be very useful for new ASP.NET developers who need to learn how to incorporate radio buttons into their ASP.NET projects....

    Read the article

  • Chrome Mobile: The Mobile Web Developers Toolkit (Part 1)

    Building for the mobile web requires a different mindset than desktop web development, and a different set of tools. The tools we're used to using often aren't available or would take up too much screen real estate. And going back to the dark ages of tweak/save/deploy/test/repeat isn't exactly optimal, so what can we do? Thankfully there are a number of great options - from remote debugging to emulation, mobile browsers are offering more and more tools to make our lives easier. We'll take a look at a couple of tools that you can use today to make cross-platform mobile web development easier, and then peer into the crystal ball to see what tools the future may bring. Join us for Part 1 as we take a look at a few boilerplates, frameworks and helpful libraries for building the mobile web. From: GoogleDevelopers. Time: 01:00:00.

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2 and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT   RES   SHR S %CPU %MEM   TIME+  SWAP COMMAND
         4121 jboss  20   0 5913m  603m   14m S  0.7  8.7 3:59.90  5.2g java
        22730 root   20   0 2394m  4012  1976 S  2.0  0.1 4:20.57  2.3g PassengerHelper
        20564 rails  20   0 2539m  220m  9828 S  0.3  3.2 0:23.58  2.3g java
         1423 nscd   20   0  877m  1464   972 S  0.0  0.0 0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
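
    One frequent explanation for output like this is that top's per-process SWAP column is derived (roughly VIRT minus RES) rather than read from the kernel's swap accounting, so it mostly reflects mapped-but-never-resident address space such as the JVM's reserved heap. A way to cross-check against the kernel's own per-process figure (the VmSwap field, available on this 2.6.35 kernel):

        # Print actual per-process swap usage as reported by /proc/<pid>/status.
        for pid in $(ps -eo pid --no-headers); do
            swap=$(awk '/^VmSwap:/ {print $2}' /proc/"$pid"/status 2>/dev/null)
            if [ -n "$swap" ] && [ "$swap" -gt 0 ]; then
                echo "PID $pid: ${swap} kB swapped"
            fi
        done

    With 0k of the swap partition actually in use (as the Swap: line shows), this loop should print nothing, which would confirm the column is an estimate rather than real swap consumption.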

    Read the article

  • Web Speech API reaches a new milestone: the JavaScript specification will allow speech recognition to be integrated into Web pages

    The Web Speech API specification has just reached an important milestone in its standardization. The W3C Web Speech API working group recently published the future standard along with a call for members to agree on the final specification. This specification describes a JavaScript API that will allow developers to integrate speech recognition into Web pages. With this API, developers will be able to use scripts to generate text from speech, and to use speech recognition as input for...

    Read the article

  • Ubuntu server and services

    - by Vicenç Gascó
    I've been using a Linux+Plesk virtual server as a web server for a while, but I want to try doing it manually, so my question is: I'll have a server with 80GB HDD, 4GB RAM, 1TB bandwidth and 1 dedicated IP. I use the following things on my virtual server nowadays: Mail server, DNS server, Apache + PHP 5.5 + MySQL, FTP, SSH. My question is: without Plesk, can I achieve all those functionalities manually (knowing that I am not a terminal pro), actually upgrading some of them so the stack looks like this on Ubuntu Server? Mail server (with a nice webmail included), DNS server, nginx + PHP 5.5 + MySQL + MongoDB, FTP + SFTP, SSH, Git server. Which Ubuntu Server version should I choose? [EDIT] I almost forgot: I'd like to know how much bandwidth and CPU each of my web apps is using (usually one per domain), and the overall usage (not just from the web apps, but also mail, DNS, etc.)... Plesk usually does that for me, and I don't know how to measure that without it!
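
    All of those services are available from the standard repositories without Plesk. A rough starting point (package names from the Ubuntu archives of that era; Roundcube for webmail and vsftpd for FTP are just one possible choice each, and the archive's php5-fpm will be whichever PHP 5.x that release ships, not necessarily 5.5):

        sudo apt-get update
        # openssh-server: SSH + SFTP; nginx + php5-fpm: web stack; mysql-server, mongodb: databases;
        # postfix + dovecot-imapd + roundcube: mail with webmail; bind9: DNS; vsftpd: FTP;
        # git-core: Git hosting over SSH.
        sudo apt-get install openssh-server nginx php5-fpm mysql-server mongodb \
                             postfix dovecot-imapd roundcube bind9 vsftpd git-core

    For the per-domain bandwidth and CPU question, log-based tools such as awstats (per-vhost traffic from the nginx logs), vnstat (per-interface totals) and munin (overall system graphs) are the usual Plesk-free substitutes.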

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance (m1.medium), Ubuntu 12.04, Apache 2.2.22 running a Drupal site using a MySQL DB server. RAM info:

        ~$ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             3          1          2          0          0          0
        -/+ buffers/cache:          0          2
        Swap:            0          0          0
        Total:           3          1          2

    Hard drive info:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  4.7G  2.9G  62% /
        udev            1.9G  8.0K  1.9G   1% /dev
        tmpfs           751M  180K  750M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.9G     0  1.9G   0% /run/shm
        /dev/xvdb       394G  199M  374G   1% /mnt

    The problem: About two days ago the site started failing because the MySQL server was shut down by Apache with the following message:

        kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That states that the VM was occupying 0.9GB, but my RAM has 2GB free, so 1GB was still left free. I understand that in Linux applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has started to happen. Obviously the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf and went to check it out. It was set to 150, and I decided to drop it to 60, like so:

        <IfModule mpm_prefork_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
            ...
            MaxClients 60
        </IfModule>

    Once I did that, I restarted the apache2 service and it all went smoothly for 3/4 of a day. But at night the MySQL service shut down once again, though this time it wasn't killed by the Apache2 service; instead it invoked the OOM killer, with the following message:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (added to the file /etc/sysctl.conf):

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator, I have basic knowledge. Thanks a bunch in advance.
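
    Whether 60 is a safe MaxClients value depends on how large each Apache child actually is next to mysqld's footprint. A rough way to measure it (process names assumed to be apache2 and mysqld):

        # Average resident size of the Apache children, in kB.
        ps -C apache2 -o rss= | awk '{sum += $1; n++} END {if (n) printf "apache2: %d procs, avg RSS %.0f kB\n", n, sum / n}'

        # Resident size of MySQL, in kB.
        ps -C mysqld -o rss= | awk '{print "mysqld RSS: " $1 " kB"}'

        # Rule of thumb: MaxClients * avg_apache_rss + mysqld_rss should stay
        # comfortably below the ~3.7 GB of RAM on an m1.medium.

    Since free shows no swap configured at all, adding even a modest swap file would also give the box some headroom before the OOM killer has to fire.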

    Read the article

  • Delphi native Web Service applications not working. IIS 7 seems to be stripping the pathinfo

    - by Cary Jensen
    I've run into an interesting problem that I never encountered with XP or IIS 6. Basically, I can't get a native Delphi (WebBroker) Web service server to work with a native Web service client on Windows 7 64-bit. Here's the most basic breakdown. If I create a new Web service application in Delphi 2010 (or any version, back to Delphi 7) and access it using IE 8, I can see the HTML that the WSDLHTMLPublish component creates, but I can never get to the SOAP. In the same way, the WSDL Importer cannot get to the SOAP either. (I have IIS 7 configured to use a 32-bit application pool, and I have created a working Script Map in the Handler Mappings. In short, the 32-bit ISAPI Web service is running.) For example, I have a simple Web service server named TestService (created using the default sample interface generated when you create a new Web service server), installed in a virtual directory named scripts. If I enter http://localhost/scripts/TestService.dll/wsdl, IIS 7 displays the page http://localhost/scripts/TestService.dll. If I put my mouse over the WSDL link for the ITestService, I see http://localhost/scripts/TestService.dll/wsdl/ITestService in the status bar. However, when I click this link, the address bar shows http://localhost/scripts/TestService.dll/wsdl/ITestService, but I see only the HTML from http://localhost/scripts/TestService.dll. There seems to be no way to get to the SOAP definition. IIS 7 seems to be ignoring everything after the script name (it is ignoring the pathinfo). Additional evidence that IIS 7 is stripping off the pathinfo is that if I pause my mouse over the ITestService link, the status bar shows http://localhost/scripts/TestService.dll?intf=ITestService. Clicking that link takes me to another HTML page, the one associated with http://localhost/scripts/TestService.dll?intf=ITestService. However, any link that includes a pathinfo following the script name takes me simply to http://localhost/scripts/TestService.dll. I have tested this in Delphi 7, Delphi 2010, and Delphi XE, with the same results. I am guessing that IIS 7 is stripping off the pathinfo, since even the WSDL Importer cannot get to the SOAP definition. I tried creating a new Web service using the CGI option and got the same result. Any idea what is going on? Added: Bob Swart reports he has had no problems under Windows 7 32-bit. I'm downloading the 32-bit OS and will try that (in a new VM).

    Read the article

  • How do I access SourceGear Web services using SOAP::Lite?

    - by user565793
    For some reason SourceGear provide an undocumented Web service on their installations. They actually ask developers to use the API instead because the Web service is kinda messy, but this is a problem in my case because I cannot use this API on a Perl environment, so their solution for my specific case is to use the Web service. This shouldn't be a problem. Using SOAP::Lite I have connected to several Web services in the past in the same way. But the lack of documentation is a major chaos if you don't know where the SOAP calls can be made. I only have an XML to decipher where to and how to make these calls. It would be great if a real SOAP genius could help me out in this. This is an example of the login call and the expected response: Request POST /fortress/dragnetwebservice.asmx HTTP/1.1 Host: velecloudserver Content-Type: text/xml; charset=utf-8 Content-Length: length SOAPAction: "http://www.sourcegear.com/schemas/dragnet/LoginPlainText" <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet"> <strLogin>string</strLogin> <strPassword>string</strPassword> </LoginPlainText> </soap:Body> </soap:Envelope> Response HTTP/1.1 200 OK Content-Type: text/xml; charset=utf-8 Content-Length: length <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainTextResponse xmlns="http://www.sourcegear.com/schemas/dragnet"> <LoginPlainTextResult>int</LoginPlainTextResult> <strAuthTicket>string</strAuthTicket> </LoginPlainTextResponse> </soap:Body> </soap:Envelope> I'm looking for a way to be able to assemble this somehow. This is my Perl example: my $soap = SOAP::Lite -> uri ('http://velecloudserver/fortress/dragnetwebservice.asmx') -> proxy('http://velecloudserver/fortress/dragnetwebservice.asmx/LoginPlainText'); my $som = $soap->call('LoginPlainText', SOAP::Data->name('LoginPlainText')->value( \SOAP::Data->value([ SOAP::Data->name('strLogin')->value( 'admin' ), SOAP::Data->name('strPassword')->value('Adm1234'), ])) ); Any tip would be appreciated.

    Read the article
