Search Results

Search found 1081 results on 44 pages for 'migrating'.

Page 10/44

  • Migrating an Ajax web application to WebSocket

    - by Bastan
    Hi, I think I'm just missing a little detail that is preventing me from seeing the whole picture. I have a web application which uses an Ajax request every X amount of time to update the client with new information or tasks. I also have a long-running process on the server, a Java computation engine, and I would like this engine to send updates to the client. I am wondering how to migrate my web app to WebSockets, probably with phpwebsocket or similar. Can my server 'decide' to send information to a specific client? It seems possible looking at php-websocket. Can my long-running Java backend process use the WebSocket server to send a notification to a specific client, and how? I can see that my Java app could use a class that sends over a WebSocket instead of HTTP, but how does the WebSocket server know which client to send the info to? I am puzzled by all this. Is there any document that explains this in more detail? It seems that the WebSocket server could create an instance of my web application. Thanks
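
    A minimal sketch of one common pattern (independent of phpwebsocket): the WebSocket server keeps a registry of open connections keyed by an application-level client ID, and the long-running engine asks that registry to push to one client. This assumes Python's third-party websockets package; the client_id handshake and all names are invented purely for illustration.

    ```python
    # Sketch: route server-side pushes to a specific WebSocket client.
    import asyncio
    import json
    import websockets

    clients = {}  # client_id -> open WebSocket connection

    async def handler(websocket, path=None):
        # First message from the browser identifies the client (illustrative protocol).
        hello = json.loads(await websocket.recv())
        client_id = hello["client_id"]
        clients[client_id] = websocket
        try:
            async for _ in websocket:   # keep the connection open until the client leaves
                pass
        finally:
            clients.pop(client_id, None)

    async def push_to_client(client_id, payload):
        # Called when the computation engine produces a result for one client.
        ws = clients.get(client_id)
        if ws is not None:
            await ws.send(json.dumps(payload))

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())
    ```

    A Java engine would typically not call this directly; it would hand results to the WebSocket process (for example over a queue or an internal socket) together with the target client_id, and the process above does the delivery.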

  • Migrating Application Configuration from Windows Registry to SQLite

    - by baris_a
    Currently, I am working on the migration mentioned in the title. The problem is that the application configuration kept in the registry has a tree-like structure, for example: X -> Y -> Z -> SomeKey = someValue, W -> AnotherKey = anotherValue, and so on. How can I model this structure in SQLite (or any other DB)? If you have experience with similar problems, please post. Thanks in advance.
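
    One common way to model a registry-style tree in a relational database is an adjacency list: a single table in which every key row stores its parent's id. Below is a minimal sketch using Python's built-in sqlite3 module; the table and column names are only illustrative.

    ```python
    import sqlite3

    conn = sqlite3.connect("config.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS config_node (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES config_node(id),
            name      TEXT NOT NULL,
            value     TEXT              -- NULL for pure "folder" keys
        )
    """)

    def add_node(parent_id, name, value=None):
        cur = conn.execute(
            "INSERT INTO config_node (parent_id, name, value) VALUES (?, ?, ?)",
            (parent_id, name, value))
        return cur.lastrowid

    # Mirror X -> Y -> Z -> SomeKey = someValue from the question.
    x = add_node(None, "X")
    y = add_node(x, "Y")
    z = add_node(y, "Z")
    add_node(z, "SomeKey", "someValue")
    conn.commit()

    # Enumerating the children of a key, like a registry enumeration.
    for name, value in conn.execute(
            "SELECT name, value FROM config_node WHERE parent_id = ?", (z,)):
        print(name, value)
    ```

    Nested-set or materialized-path schemas are alternatives when deep subtree queries dominate, but the adjacency list is the most direct translation of registry keys.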

  • Migrating an Access Database into SharePoint 2007.

    - by Mike T
    To my surprise and delight I read that an administrator can import (nearly directly) an Access 2007 database into a SharePoint site. Automagically, the database is transformed into lists and views, with some table lookups thrown in for good measure. With Access 2007 installed on the client machine, even the forms and whatnot can still be reused. To me... this sounds too good to be true. Has anyone actually done this? With all this good news, where are the pitfalls and the bad stuff? Depending on the size of the database, wouldn't this somehow "gum up the works" in the SharePoint database? Sources: http://madhurahuja.blogspot.com/2007/01/adding-data-to-sharepoint-l-ists-in.html http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/17745835-a861-4984-9f44-7291fdae7d07

  • Resources for migrating from C/C++ to C#

    - by EquinoX
    I know there are a lot of resources for this via Google, but I just wanted to hear personally from people who have experienced this before. I've programmed in C for 3 years and C++ for a year, and now I am moving to C#. I know this is not going to be such a hard transition, but could those of you who have made the same move share a good book, article, or blog to make my study more efficient? Any tips/tricks or gotchas when moving to C#? Here's one article that I found via Google. Looking for more goodies from experienced developers here.

  • What do Windows users like most after migrating to Ubuntu?

    - by Bakhtiyor
    The title speaks for itself. There are a lot of interesting new features in Ubuntu. For example, after migrating to Ubuntu the most interesting feature for me was centralized application installation via Synaptic (users do not need to search for an application, download it from somewhere, install it, and, if it is pirated software, hunt for keygens and stuff like that). What else could be added to the list?

  • A little gem from MPN – FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc.). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep, high-quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, need to be a member of MPN, which is free to join). This is first-class stuff, and represents about 4 hours of material, which is really 8 if you stop and ponder :)
    Course Structure: the course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.
      • Module 1: Introduction. Provides an introduction to the training course, highlighting the value of the Windows Azure Platform for developers.
      • Module 2: Dynamic Environment. Explains the difference between current development environments and the Windows Azure Platform environment, details the functions of roles, and highlights development considerations to be aware of when working with the platform.
      • Module 3: Local State. Details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure) and gives technical guidance on local storage usage, how to write to blobs, how to use table storage effectively, and other authorization methods.
      • Module 4: Latency and Timeouts. Explains the considerations surrounding latency and timeouts and how to assess an IT portfolio.
      • Module 5: Transactions and Bandwidth. Details the transaction and bandwidth costs involved with the Windows Azure Platform and the mitigation techniques that can be used to manage those costs.
      • Module 6: Authentication and Authorization. Covers authentication and authorization protocols within the Windows Azure Platform, including web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.
      • Module 7: Data Sensitivity. Details the data considerations users and developers face when placing data in the cloud, and the strategies developers can take to increase the security of their data.
      • Module 8: Summary. Provides an overall review of the course.

  • Does migrating a site that is 99% several megs of static HTML from Apache to Google App Engine make sense?

    - by JonathanHayward
    I have a large site of mostly static content, and I have entertained migrating to Google App Engine. I am wondering not so much whether it is possible as whether it would be like cutting a steak with a screwdriver. I see a way to do it in Django that has a bad design smell. Does migrating a literature site that is largely static HTML from Apache to Google App Engine make sense? I'm not specifically asking for a comparison to Nginx or Cherokee; I am interested in whether migrating from a traditional web hosting solution to a more cloud-like solution recommends itself. The site is JonathansCorner.com, and is presently unavailable ("the magic blue smoke has escaped").

  • Migrating to a new Dovecot server; Dovecot fails to authenticate using old password database

    - by Ironlenny
    I am migrating my company's intranet from an OS X server to an Ubuntu 12.04 server. We use a flat file to store user names and password hashes. This file is used by Apache and Dovecot to authenticate users. The Ubuntu server is running Dovecot 2.0 while the OS X server is running Dovecot 1.2. I have already migrated WebDAV, which uses Apache for authentication, and authentication works. I'm now in the process of migrating our Prosody server, which uses Dovecot for authentication. Dovecot is up and running, but when I test authentication using either telnet (a login username password) or doveadm (sudo doveadm auth username), I get dovecot: auth: passwd-file(username): unknown user and dovecot: auth: Debug: client out: FAIL#0111#011user=username in my log file. I can use sudo dovecot user username to perform a user lookup and it will return the user's info, and I can generate a password hash locally and Dovecot will authenticate the test password just fine.

  • Is there a better approach to migrating SIT SVN to UAT SVN?

    - by huahsin68
    In web development, the same piece of source code is deployed to SIT (system integration testing) SVN/WAS and UAT (user acceptance testing) SVN/WAS. Please note that I am using Jenkins to build everything. I have already ensured that SIT SVN and UAT SVN are in sync by doing a manual diff on the two directories. Usually I make sure SIT WAS is working fine and only then deploy to UAT WAS. But now a problem has shown up in UAT WAS that does not appear in SIT WAS, and I suspect a migration fault between SIT SVN and UAT SVN. In such a scenario, is there a better approach to handling this problem?

  • Avoiding Object Oriented Pitfalls, Migrating from C, What Worked for You?

    - by Stephen
    I've been programming in procedural languages for quite some time now, and my first reaction to a problem is to start breaking it down into tasks to perform rather than to consider the different entities (objects) that exist and their relationships. I have had a university course in OOP, and understand the fundamentals of encapsulation, data abstraction, polymorphism, modularity and inheritance. I read Learning to think in the Object Oriented Way and Learning object oriented thinking, and will be looking at some of the books pointed to in those answers. I think that several of my medium to large sized projects will benefit from effective use of OOP, but as a novice I would like to avoid time-consuming, common errors. Based on your experiences, what are these pitfalls and what are reasonable ways around them? If you could explain why they are pitfalls, and how your suggestion is effective in addressing the issue, it would be appreciated. I'm thinking along the lines of something like "Is it common to have a fair number of observer and modifier methods and use private variables, or are there techniques for consolidating/reducing them?" I'm not worried about using C++ as a pure OO language if there are good reasons to mix methods. (Reminiscent of the reasons to use GOTOs, albeit sparingly.) Thank you!

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

    This paper provides guidance for customers who prior to SQL Server 2012 have deployed SQL Failover Clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also contains the important knowledge and considerations that you must know in order to successfully migrate to a HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

  • Migrating to SSH key authentication; implications of adding sbin directories to a user's $PATH

    - by ancillary
    I'm in the process of migrating to keys for authentication on my CentOS boxes. I have it all set up and working, but was a bit taken aback when I noticed that service (and other things) didn't work the way I was accustomed to. Even after su'ing to root, I still had to call the full path for it to work (which I assume is expected/normal behavior). I also assume this is because there are different $PATHs for root (what I was using and am used to) and the newly created, key-using user. Specifically, I noticed the sbin directories missing from the user's path. If I were to add those paths (/sbin/, /usr/sbin/, /usr/local/sbin) to a profile.d .sh script for this new key-loving user, would I be opening up the system in ways I shouldn't? Would I be doing something I needn't do except out of laziness? Would I create other potential problems? Thanks.

  • Migrating Windows 2003 File Server Cluster to Windows 2008 R2 Standalone?

    - by Tatas
    We have a situation where we have an aging Windows 2003 File Server Cluster that we'd like to move to a standalone Windows Server 2008 R2 VM that resides in our Hyper-V R2 installation. We see no need to keep the clustering, as Hyper-V now provides our failover/redundancy. Usually, in a standalone file server migration, we migrate the data preserving NTFS permissions, then export the sharing permissions from the registry and import them on the new server. This does not appear possible in this instance, as the 2003 cluster stores the sharing permissions quite differently. My question is, how would one perform this type of migration? Is it even possible? My current lead is the File Server Migration Toolkit; however, I can find no information on the net about migrating from cluster to standalone, only the opposite. Please help. UPDATE: We ended up getting the data copied over (permissions intact), but had to recreate the shares by hand. It was a bit of a pain, but it worked out in the end.

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense that they can't be replicated at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it only drives the search functionality, but it's possible that Redis may need to be replicated at some point - should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here - how can I deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
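
    On the deployment question, a common way to cope with a varying number of web servers is to discover them at deploy time instead of hard-coding a host array. Below is a hedged sketch (this is not Capifony itself) using the boto3 EC2 API and an invented "Role=web" tag convention:

    ```python
    import boto3

    def current_web_servers(region="eu-west-1"):
        """Return the addresses of all running instances tagged Role=web."""
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_instances(Filters=[
            {"Name": "tag:Role", "Values": ["web"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ])
        hosts = []
        for reservation in resp["Reservations"]:
            for instance in reservation["Instances"]:
                hosts.append(instance.get("PublicDnsName") or instance["PrivateIpAddress"])
        return hosts

    if __name__ == "__main__":
        # Feed this list to whatever deploy tool you use (Capifony roles, Fabric, etc.).
        for host in current_web_servers():
            print(host)
    ```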

  • Migrating WebLogic 10.3.0 to a new host: slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed server(s), basically 20x slower than our current production. For instance, the Publish managed server takes ~30-45 seconds in current production, and in the new environment it takes ~10 minutes. The setup uses the same domain structure and JVM as the current production environment, and the same setup files are used. We use jdk1.6.0_33 on a 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are: "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m". The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check that Oracle RAC (11.2.0.3) wasn't hitting some kind of process limit and that there was no TNS listener issue. The new host is quite a bit stricter with its server lockdowns, so there are a few differences: Red Hat 6.3 in the new environment vs. RHEL 5.7 in current; SELinux is targeted in the new environment and disabled in current; a VM in the new environment vs. dedicated hardware in current; iptables disabled in current (it was enabled in new prod, but I had them disable it just in case). I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment; I am just hoping for a path forward. I did a few 'kill -3's to see if there are blocked threads and got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade

  • Public Folders - Deleting Public Folders from 2003 after migrating to 2010 (via ADSI Edit) - safe?

    - by HeavenCore
    Similar question: How do I delete a public store in Exchange 2003? We are ready to remove our Exchange 2003 server after having migrated all public folders and mailboxes to 2010. We ran for a week with the Exchange 2003 server shut down and everything seemed to work. When I try to delete the PF database from 2003 it says it contains replicas. Whilst migrating I only had one-way sync working (from 2003 to 2010), so I believe 2003 hasn't received the responses from 2010 saying the replicas were removed. When I look in Public Folders on the 2003 box none are listed; when I look in PF Instances they are all listed. I know everything has moved to the 2010 server, and I know 2010 is not showing the 2003 server as a replica for any folders. I am looking to use ADSI Edit to remove the public folder database from the 2003 server, but want to ensure I am going to delete the right thing so that the folders do not get deleted from the 2010 database. Should I delete Configuration, Services, Microsoft Exchange, Company Name, Administrative Groups, First Administrative Group, Servers, Server Name, Information Store, First Storage Group, Public Folder Store (Server Name)? Or something else? I have checked, and the only public folder with the old Exchange server listed as a replica is SYSTEM CONFIGURATION. Thanks in advance.

  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of other PHP frameworks (like CodeIgniter, CakePHP, etc...). So I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, the TCPDF library, fdf, etc... so I started bundling these technologies, along with things I've built over the years, into a LAMP development framework to make life easier for myself. This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework, and coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:
    Pros:
      • benefit from the vast community of CI developers
      • lots of other developers will be familiar with it
      • better documentation
    Cons:
      • I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
      • at the moment, I still work faster against my own framework than CI, just because I'm more familiar with it
      • even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again
    So my question is the following: perhaps I should leave my old framework as is, and for each new project I receive, decide whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. Background: we are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet when we should probably spend time reworking our queries and code first. With that said, I would appreciate any input on the following questions: We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with its growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big Cartesian products that produce slow queries, should we roll out what we know, after more optimization, on Ubuntu/MySQL with the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer? What are the key metrics I need to gather from Apache and MySQL, other than disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions. What other services aside from the basic web/database offerings have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer; echoing my previous question, do you sometimes find it worth it (initially a gamble, aside from basic homework) to dive into a whole new environment and end up finding a more efficient way of producing your data/site product? Anything I should watch out for in projecting costs, or any other general advice when working with AWS, when your company is very niche and very, very technical (scientifically, or otherwise)? Thanks very much for your input - I think this thread could be valuable to others as well.
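
    On the metrics question, the usual approach is to trend deltas of a handful of counters over representative periods. Here is a hedged sketch of sampling a few MySQL counters (it assumes the third-party PyMySQL package and illustrative credentials; Apache would be sampled separately, e.g. via mod_status):

    ```python
    import time
    import pymysql

    WATCHED = ["Questions", "Slow_queries", "Bytes_received", "Bytes_sent",
               "Innodb_data_reads", "Innodb_data_writes"]

    def snapshot(conn):
        # SHOW GLOBAL STATUS returns (name, value) rows; keep only the counters we trend.
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS")
            status = dict(cur.fetchall())
        return {name: int(status[name]) for name in WATCHED if name in status}

    conn = pymysql.connect(host="localhost", user="monitor", password="secret")
    before = snapshot(conn)
    time.sleep(60)                 # sample interval
    after = snapshot(conn)

    # Per-minute deltas, gathered over days, show peak periods and I/O pressure.
    for name in WATCHED:
        if name in before and name in after:
            print(name, after[name] - before[name])
    ```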

  • Migrating from Apache2 to Lighttpd causing errors in PHP/MySQL?

    - by Jean-Philippe Murray
    OK, I've been using basic Ubuntu LAMP setups for years now, and I wanted to give lighttpd a try. My LAMP setup runs in a virtual machine with scripts running just fine. So I created a new virtual machine, starting with a fresh install of Ubuntu, and did my setup. On this new VM, lighttpd + PHP works just fine (or at least it seems to). The problem occurs when I take the scripts from my LAMP setup and upload them to the new VM. I'm getting: Warning: mysql_real_escape_string(): Access denied for user 'www-data'@'localhost' (using password: NO). My lighttpd setup is configured with php-cgi but my apache2 setup is not. Could this be the source of the problem? I would think the scripts should be independent of the server configuration, so I doubt it. Also, I know that my DB connection information is good (as I can log in via phpMyAdmin perfectly). I'm in the dark here, any pointers? Thanks,

  • Migrating master-slave MySQL database servers to 2 new servers, any tips or suggestions?

    - by mmattax
    I'm setting up 2 new database servers that will be replacing a current master-slave setup. All boxes are running / will be running MySQL on RHEL. Our current naming conventions: db1 - master database db2 - slave (using MySQL replication) db01 - new master db02 - new slave We need to get db01 to be the new master with db02 as the new slave. What is the best way to migrate db1 and db2 to db01 and db02? db1 and db2 are running in a production setting and we need to minimize all downtime; db1 has roughly 30GB of data in the database. Any suggestions or tips on how to migrate to our new servers would be much appreciated.

  • What config files need to be transferred when migrating Apache vhosts from an old SUSE server to a new SUSE server?

    - by jarus
    I have an old server with SUSE on it, and it hosts numerous websites under the same IP. Now I am trying to migrate the websites and all the contents of the old SUSE server to a new server with openSUSE 12.1. I have transferred "/srv/www/vhosts", "/etc/apache2/vhosts.d", "/etc/apache2/httpd.conf", "/etc/apache2/listen.conf", and "/etc/apache2/default-server.conf", and I have transferred all the database files as well. I am trying to replace the old server with the new server; I tried changing the IP address to the old server's IP address, but it's not working. What files do I need to transfer, and what do I need to do, to get the new server hosting the websites in place of the old one? Please, any help will be greatly appreciated.

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and having to go around manually setting up Outlook and copying over PST contents is not an option I'm looking for. Some users have set Outlook to keep messages for X number of days on the POP3 server, others have not, so using a POP3 connector to transfer over the mail is not a viable option. Here is what I've done so far:
      • Created a transform for the Office 2003 administrative installation point
      • Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encrypt hotfix described in MSKB 2006508)
      • Tested both the transform and the PRF; both work
      • Created a test OU and GPO containing the Office 2003 installation with the transform applied; this also works
    My big question is: How can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts up Outlook for the first time after the MST/PRF have been applied? Is this possible?

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps that I was planning to perform:
      1. Shut down both DCs.
      2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters by IP address instead of hostname).
      3. Power them up at the new datacentre.
      4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.
    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was very slow, registering only 1KB/s as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences; the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
