
  • Simple monitoring utility with up/down statuses of the host's network connectivity and services

    - by Beaming Mel-Bin
    We've looked at many monitoring tools (SolarWinds, Zabbix, Nagios) throughout the last 10 years, but they never took hold because they are overly complicated. I am willing to try them again, or something new, at this point, but with a much simpler goal:

    - ping to check up/down status of hosts
    - TCP probes to test up/down status of services
    - notifications via e-mail
    - a web GUI
    - preferably an OSS solution

    I wanted to know if someone has any recommendations on this. This could be a Windows or Linux application, preferably without the requirement of agents. I don't even need SNMP support, but that may be nice for expanding once we have the above-mentioned bare minimum in place.
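
    For scale, the requested checks fit in a few dozen lines of script. Below is a minimal sketch (not a tool recommendation): one ping per host, one TCP connect per service, and mail through an SMTP relay assumed to be listening on localhost. The hosts, ports, addresses, and the Linux ping flags are all placeholders or assumptions.

    ```python
    import socket
    import smtplib
    import subprocess
    from email.message import EmailMessage

    # Placeholder targets -- replace with your own hosts and services.
    HOSTS = ["192.0.2.10", "192.0.2.11"]
    SERVICES = [("192.0.2.10", 80), ("192.0.2.11", 443)]
    ALERT_FROM = "monitor@example.com"
    ALERT_TO = "admin@example.com"

    def host_up(host):
        """Ping once (Linux flags assumed); exit code 0 means the host answered."""
        return subprocess.call(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ) == 0

    def service_up(host, port):
        """TCP probe: the service is 'up' if the port accepts a connection."""
        try:
            with socket.create_connection((host, port), timeout=3):
                return True
        except OSError:
            return False

    def alert(subject):
        """Send a notification through an SMTP relay assumed on localhost."""
        msg = EmailMessage()
        msg["Subject"], msg["From"], msg["To"] = subject, ALERT_FROM, ALERT_TO
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    for h in HOSTS:
        if not host_up(h):
            alert("DOWN: host %s not answering ping" % h)
    for h, p in SERVICES:
        if not service_up(h, p):
            alert("DOWN: service %s:%d refused TCP connection" % (h, p))
    ```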

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place, and information will reside there.

    Now, the point of this post is to raise awareness of the options available to companies that have implemented it and maybe are a bit "iffy" on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states:

    "You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended for use only with single-server deployments."

    Even in the event of single-server deployments, you have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a full server backup. When the restore operation is needed, you will then be able to select specifically the section that holds the SharePoint technologies backup.

    Hope you find this to be a helpful post. I have found it to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm. Credits: Sean McDonough for passing along the information available on TechNet.

  • Sudo yum seems to fail on CentOS, but works fine after sudo -i

    - by Aron Rotteveel
    I am currently having some trouble with yum through sudo. For some reason, it does not seem to work:

        aron@graviton [/var/log]# sudo yum clean all
        There was a problem importing one of the Python modules required to run yum.
        The error leading to this problem was:
        /usr/lib64/python2.4/lib-dynload/datetime.so: failed to map segment from shared object: Cannot allocate memory
        Please install a package which provides this module, or verify that the module is installed correctly.
        It's possible that the above module doesn't match the current version of Python, which is:
        2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]
        If you cannot solve this problem yourself, please go to the yum faq at:
        http://wiki.linux.duke.edu/YumFaq

    The strange thing, however, is that it works fine when I gain root privileges through sudo -i first. Any ideas what might be causing this problem?
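
    One common cause of this exact symptom (mmap of a shared object failing under plain sudo but not under sudo -i) is a per-process memory ulimit inherited from the invoking shell; sudo -i starts a login shell that picks up root's limits instead. A hedged diagnostic sketch, just to compare the two environments, not a fix:

    ```python
    import resource

    # Print the memory-related limits this process inherited.  Run it once as
    # `sudo python check_limits.py` and once from a `sudo -i` shell, then
    # compare: a small soft RLIMIT_AS in the first case would explain the
    # "Cannot allocate memory" mmap failure.
    for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_STACK"):
        soft, hard = resource.getrlimit(getattr(resource, name))
        print("%s: soft=%s hard=%s" % (name, soft, hard))
    ```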

  • Latency in TCP/IP-over-Ethernet networks

    - by aix
    What resources (books, Web pages, etc.) would you recommend that:

    - explain the causes of latency in TCP/IP-over-Ethernet networks;
    - mention tools for looking out for things that cause latency (e.g. certain entries in netstat -s);
    - suggest ways to tweak the Linux TCP stack to reduce TCP latency (Nagle, socket buffers, etc.)?

    The closest I am aware of is this document, but it's rather brief. Alternatively, you're welcome to answer the above questions directly.

    Edit: To be clear, the question isn't just about "abnormal" latency, but about latency in general. Additionally, it is specifically about TCP/IP-over-Ethernet and not about other protocols (even if they have better latency characteristics).
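
    As a hedged illustration of the third bullet, here is how the usual socket-level knobs (Nagle's algorithm and the send/receive buffers) look in a minimal Python sketch; the endpoint and sizes are placeholders, not recommendations:

    ```python
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Disable Nagle's algorithm: small writes go out immediately instead of
    # being coalesced while an ACK is still outstanding.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # Request explicit send/receive buffer sizes (the kernel may adjust them;
    # 256 KiB here is a placeholder, not a recommendation).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

    sock.connect(("192.0.2.1", 9000))  # placeholder endpoint
    sock.sendall(b"ping")
    ```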

  • StarterSTS 1.5

    - by Your DisplayName here!
    I have had the 1.5 version of StarterSTS sitting here for quite some time now, but I was always reluctant to release it. Some of the reasons:

    - too many new features for a single (small) version change;
    - too many features that are optional, like bridged authentication, which make the code very complex;
    - the way I implemented Azure integration adds a dependency on the Azure SDK, even for "on-premise" installations, and I don't like that;
    - because I am using some WebForms bits and some WCF bits, the URL structure got messy, and WebForms also doesn't help a lot with testability.

    All of the above reasons, plus the fact that I am the only architect, developer and tester on this project, made me conclude that I will cancel this release. But wait: StarterSTS 1.5 is fully functional. We use both the on-premise and Azure versions internally "in production". Cancelling means I will release the latest source code on CodePlex, but will not mark it as a "recommended release". I also won't produce updated screencasts and docs, but the setup is very similar to earlier versions. Feel free to use and customize 1.5 and give me feedback.

    On the good-news front, I am working on a new version: welcome thinktecture IdentityServer. This version is based on MVC3 and the routing architecture, removes a lot of the clutter, has a SQL CE4-based configuration system, is more extensible, and is overall just cleaner. I will be able to upload CTPs very soon.

  • Where should I put bindings for dependency injection?

    - by Mike G
    I'm new to dependency injection, and though I've really liked it so far, I'm not sure where bindings should go. I'm using Guice in Java, so some of what I say might be specific to Guice. As I see it, there are two options:

    - Accompanying the class(es) it's needed for. Then, just write install(OtherClassModule.class) in whatever other modules want to be able to use said class. As I see it, the advantage is that classes that want to use it (or manage classes that want to use it) don't need to know any of the implementation detail. The issue I see: what if two classes want to use two different versions of the same class? DI makes a lot of customization possible, and this seems to restrict it considerably.

    - Implemented in the module of the class(es) that need it. This is the flip of the above: now you have customization, but not encapsulation.

    Is there a third option? Am I misunderstanding something obvious? What's the best practice?

  • HTTPS to HTTP redirect issue. How to overcome?

    - by Akshay
    I've already seen suggestions on how to rewrite HTTPS to HTTP, and I am currently using this technique:

        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [L,R=301]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    The problem: I am currently on a HostGator VPS and have found Google indexing my HTTPS pages. Weird for me, as I never bought an SSL certificate; my site is a blog only. When I asked on the Google forums (https://productforums.google.com/forum/#!msg/webmasters/2Hz46t44nwk/7voZWudFtAQJ), they said I should redirect HTTPS to HTTP. Now that I have added the redirect above, I am still getting SSL warnings in browsers, and Google is still indexing my new pages under HTTPS. My feeling is that because I do not have an SSL certificate, a redirect on the HTTPS side cannot work: the browser warning happens during the TLS handshake, before any redirect response can be sent. So if Google indexes my HTTPS pages, should I go and buy an SSL certificate just so I can redirect HTTPS to HTTP? Why would I do that? Please help; this has reduced my traffic by nearly 30%. I have even told search engines to use this file (which disallows everything) when they arrive over HTTPS:

        Options +FollowSymlinks
        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^robots.txt$ robots_ssl.txt

  • Does openssl errno 104 mean that SSLv2 is disabled?

    - by David
    I want to check whether my server has SSLv2 disabled. I am doing this by attempting to connect remotely with openssl, using the following shell command:

        openssl s_client -connect HOSTNAME:443 -ssl2

    Most literature I could find on the Internet says that if I see something similar to the following error, then SSLv2 is properly disabled:

        29638:error:1407F0E5:SSL routines:SSL2_WRITE:ssl handshake failure:s2_pkt.c:428:

    I do get the above error when connecting to my Ubuntu server with SSLv2 disabled in Apache, but when I connect to my Windows Server 2008 R2 server with SSLv2 disabled in the registry, I get the following output and error instead:

        CONNECTED(00000003)
        write:errno=104

    I can't find any literature explaining this output and error. If anybody could explain whether and why this output means that SSLv2 is properly disabled, I would appreciate it. Thanks!
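
    For what it's worth, errno 104 is ECONNRESET on Linux, i.e. the server reset the connection during the handshake attempt, which would be consistent with SSLv2 being refused, just more abruptly than Apache's handshake-failure alert. A hedged sketch that wraps the same probe and classifies the two refusal styles; it assumes an openssl build that still accepts -ssl2, as in the question, and the hostname is a placeholder:

    ```python
    import errno
    import subprocess

    HOST = "example.com"  # placeholder

    # Re-run the probe from the question and inspect how it fails.
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", HOST + ":443", "-ssl2"],
        capture_output=True, text=True, input="",
    )
    out = proc.stdout + proc.stderr

    if "handshake failure" in out:
        print("SSLv2 rejected with a handshake failure (the Apache-style refusal)")
    elif "errno=%d" % errno.ECONNRESET in out:
        print("connection reset during handshake (the Windows-style refusal)")
    else:
        print("inconclusive; raw output:\n" + out)
    ```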

  • Good SLA

    - by PointsToShare
    © 2011 Dov Trietsch

    What is a good SLA? I have frequently pondered Service Level Agreements (SLAs). Yesterday, after ordering and while waiting, and waiting, and waiting for the food to arrive, I passed the time reading and re-reading the restaurant menu (again and again...) until I noticed their very interesting SLA. Because (as promised) we had to wait even longer, and the conversation around me was mostly in Russian, I ended up doodling some of my thoughts about it on the menu.

    People are both providers and consumers of services. As a service consumer, maybe the SLA above sucks; though to be honest, had the service been better, I would not have noticed it, and you, the reader, would have been spared this rambling monograph. As a provider, I think it's great! Because I provide services in the form of business software, I extend the idea to the following principles of design:

    1: Wygiwyg. You guessed it: What You Get Is What You Get.
    2: Ugiwugi. U Get It When U Get It.

    How's this for a developer-friendly SLA? I'll never be off the spec, or late. And BTW, the food was good, so when I finally got what I got, I liked it. That's All Folks!!

  • Clean URLs on Hiawatha

    - by Botto
    I am using the Hiawatha web server and running Drupal on a FastCGI PHP server. The Drupal site uses ImageCache, which requires either private files or clean URLs. The issue I am having with clean URLs is that requests for files are being rewritten to index.php as well. My current config is:

        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match (/files/*) Rewrite $1
            Match ^/(.*) Rewrite /index.php?q=$1
        }

    The above does not work. For comparison, Drupal's Apache setup is:

        <Directory /var/www/example.com>
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </Directory>

  • Interesting sessions/tips from RMOUG

    - by jean-pierre.dijcks
    One of the sessions I attended at last week's RMOUG was on temporary tablespace groups. I had a look because I had no experience with them, and they seemed to help with parallel processing and the allocation/usage of temp space. You can read the excellent write-up at Kellyn Pedersen's blog (she did the session and all the work) here. So for all of those who may be seeing lots of waits like enq: TS - Contention when doing hash joins and sorts, do have a look at the blog post above.

    I also had the chance to listen in on Stewart Bryson's session on restartability (he had 3 R's), where he gave very useful tips about how to deal with your data warehouse loads. Questions like "archive log mode: should I or should I not?" were well covered. Flashback archives were also nice to hear about. A very nice talk, very interesting. Unfortunately he hasn't blogged about it yet, so no pointers to that one.

    I got to see a couple of other interesting sessions and, as conferences go, got to meet some interesting Oracle folks from the region. As usual, RMOUG was useful and fun. Off to the drawing board to design next year's session!

  • SNMP trap using DISMAN-EVENT-MIB related issue

    - by jatin bodarya
    My snmpd.conf file contains the following DISMAN-EVENT-MIB lines, which generate a trap when the condition evaluates to false:

        notificationEvent ifMtu.1 IF-MIB::ifMtu.1 1.3.6.1.2.1.2.2.1.4.1
        monitor -I -u root -s -t -r 18 "Warn: High ipp Usage" -e ifMtu.1 1.3.6.1.2.1.2.2.1.4.1 !=

    My issue is that I want to send "trap severity levels" with it. Is that possible? If so, how? If it isn't, is there any other way to send them?
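
    If the monitor directive itself cannot carry a severity, one hedged workaround is to send the trap yourself and attach the severity as an extra varbind, e.g. by shelling out to Net-SNMP's snmptrap. A sketch only; the receiver, community string, and OIDs below are placeholders you would replace with values from your own MIB:

    ```python
    import subprocess

    MANAGER = "192.0.2.50"    # placeholder trap receiver
    COMMUNITY = "public"      # placeholder community string
    TRAP_OID = "1.3.6.1.4.1.8072.2.3.0.1"      # example notification OID
    SEVERITY_OID = "1.3.6.1.4.1.8072.2.3.2.1"  # placeholder varbind OID

    def send_trap(message, severity):
        # The severity rides along as an extra string varbind; the manager
        # can then filter or route on it.
        subprocess.check_call([
            "snmptrap", "-v", "2c", "-c", COMMUNITY, MANAGER, "",
            TRAP_OID,
            SEVERITY_OID, "s", "%s: %s" % (severity, message),
        ])

    send_trap("High ipp usage", "WARNING")
    ```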

  • Issue in extending a web application in SharePoint

    - by GHIYA
    I have extended a web application in a farm. The main server is vsmoss1, where I set up:

    - vsmoss1 -> web application (port 80)
    - vm.com -> extended web app (of the one above), anonymous

    The WFE servers are vsmoss2 and vsmoss3, and vm.com is load-balanced so that hits go to vsmoss2 and vsmoss3. When I hit vm.com it works fine without authentication (it even shows the Content Query Web Part on my page). I know there is no need to do this, but when I hit vsmoss2 or vsmoss3 directly, the Content Query Web Part shows an error. Any solution for that?

    Finding this strange, I tried the following:

    - I stopped the extended web app on vsmoss2 and vsmoss3; result: the site is up and running, but this time with authentication.
    - I stopped both the extended and the main web application on vsmoss2 and vsmoss3; result: the site is down.
    - I stopped only the main web application on vsmoss2 and vsmoss3; result: the site is up and running without authentication.

    Does anyone have an idea why it behaves like this?

  • Oracle DB, Oracle ADF, GlassFish, JDeveloper, NetBeans IDE

    - by Geertjan
    Today I started some experiments with Oracle guru Steven Davelaar, who lives about 20 minutes away from my place in Amsterdam by underground. Very convenient. He showed me a bunch of things in JDeveloper, while I showed him a bunch of things in NetBeans IDE. He managed to deploy an ADF application to GlassFish in JDeveloper. So far, I have failed to do the same thing in NetBeans IDE. Quite a few JARs (around 100) are needed, aside from the question of correctly setting up or importing an ADF application, and we're still figuring out which and who and when and where. And how. And if. And why.

    Nonetheless, I did manage to get Oracle DB set up in NetBeans IDE, after downloading it from here: http://www.oracle.com/technetwork/products/express-edition/downloads/index.html

    Here's what it looks like when registered in NetBeans IDE; notice that I have a cool sample database available.

    I managed to display data from the above database very easily via the various NetBeans code generators in a PrimeFaces application, exactly as has been done many times in demonstrations and tutorials everywhere: generate JPA entities, then create an EJB, then inject the EJB into a PrimeFaces data table.

    The next step is to somehow do the same with ADF in NetBeans IDE. I had some trouble with passwords for Oracle DB; the command line (with Steven's help) proved helpful.

    Wish us luck as we continue our ADF-inspired journey. This blog entry by Shay is also relevant: Deploying Oracle ADF Essentials Applications to Glassfish.

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage, and the indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:

    Before:

    - The cache was used very sparingly; pretty much none of my memory usage was cache.
    - Swappiness was 0 (memory was used first, then swap only if needed).
    - Performance was good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4 GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory.

    After:

    - The cache fills the full 4 GB as soon as my virtual machine is run.
    - Swappiness is 0 (the same setting as before, but the cache now takes all free memory straight away).
    - Performance is terrible and unusable while running Ubuntu software: basic things like switching windows take 2+ minutes, changing screens happens frame by frame over sometimes up to 5 minutes, and I cannot run an IDE and a VM together like I could with ease before.

    So basically, any suggestions on how to take my performance back to how it was before, while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks.

    Edit: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at about 50% used / 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache takes only the remaining memory, and there is no noticeable performance degradation.
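
    To answer the "how do I see what is tied to the cache" part in a hedged way: on Linux you can at least watch the page cache grow as the VM starts by sampling /proc/meminfo. A minimal sketch (per-file attribution would need extra tooling, which this does not attempt):

    ```python
    import time

    FIELDS = ("MemTotal", "MemFree", "Cached", "Buffers", "SwapFree")

    def meminfo():
        """Parse /proc/meminfo into {field: kB} for the fields we care about."""
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in FIELDS:
                    values[key] = int(rest.split()[0])  # value is in kB
        return values

    # Sample once a second; start the VM in another window and watch
    # MemFree fall while Cached climbs.
    while True:
        m = meminfo()
        print("  ".join("%s=%d MB" % (k, m[k] // 1024) for k in FIELDS if k in m))
        time.sleep(1)
    ```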

  • AnyConnect SSL VPN split tunneling for a single website?

    - by Daniel Lucas
    We have a Cisco ASA 5510 and use split tunneling for AnyConnect SSL VPN clients: all internal addresses are tunnelled, and everything else is routed through the client's own Internet connection. We use a SaaS service that only responds to requests coming from one of our own public IP addresses; because of this, VPN users are currently unable to access it. Is there a way to specify that one specific website should be tunneled while all others are not? NOTE: Worst case, we will use a web bookmark on the clientless portal to translate through our network, but I'd like to see if the above is possible first.

  • How to enable extended logging for classic asp on IIS7 on Windows 2008 R2

    - by Neil Trodden
    I had to deploy an application that was not written by me onto the above configuration. It is a rather bizarre hybrid of ASP.NET and classic ASP, and it's the classic ASP that is proving troublesome. The client is seeing 500 Internal Server Errors, and I can find some of these in the logs, but I only get the error code and the page name and little else. What I would like to see is the actual error message, to at least give me an idea of what is going on (or not going on, depending on your point of view). I don't want to display errors in the browser, as I don't know the code well enough and this could (for all I know) display some crazy code where the db password is hard-coded into the site.

  • Set default expand/collapse state on pivot tables

    - by CLockeWork
    The setup: I have a pivot table in tabular form pulling data from an Analysis Services cube. I want to calculate the number of days between two dates, but the setup will only allow me to pull in all date elements, not just the date. I've been able to deal with this easily enough by just grouping all the columns.

    The problem: The default state for the expand/collapse buttons in the grouping above is often collapsed, but that means the dates I need aren't there, and you have to open the group and manually expand them. This also happens in some seemingly random ways, where only some rows expand.

    The question: I need a way to set these sections to always be expanded, so that the user never has to open the group to expand the rows. Ideally I'd like to avoid VBA because our end users often block it, but if that's what's needed then so be it. Is there a way to set my pivot table to never collapse its predefined groups? Note: the end user is on Excel 2010.

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included; by default, all projects with the publication option enabled are still included in the ETL run.

    Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that limits the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases; unnecessary projects can make the Extract portion of the ETL process take longer.

    A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as pxrptuser (the extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid; selecting any other field besides the projectobjectid will make the view invalid for this purpose. Any WHERE clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it already exists:

        star.project.filter.ds1=star_project_view

    3. When you run the staretl.cmd or staretl.sh process, the database link to pxrptuser is accessed and this view is used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.

    Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2, depending on the implementation.)

    Update: If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so, the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
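
    A hedged sketch of the mapping idea from the update: precompute, for every cell, the set of cell pairs whose sight line crosses it; when a cell changes, E_delta is just a dictionary lookup. Bresenham stands in here for whatever DDA/supercover variant built the original table, and the grid size and changed cell are placeholders:

    ```python
    from collections import defaultdict
    from itertools import combinations

    N = 16  # placeholder grid size

    def line_cells(a, b):
        """Cells visited by a Bresenham walk from cell a to cell b, inclusive."""
        (x0, y0), (x1, y1) = a, b
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
        err = dx + dy
        while True:
            yield (x0, y0)
            if (x0, y0) == (x1, y1):
                return
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    # Static precomputation: for every unordered pair of cells, record which
    # cells their sight line crosses.  Memory-heavy, as noted in the question.
    crossed_by = defaultdict(set)
    cells = [(x, y) for x in range(N) for y in range(N)]
    for a, b in combinations(cells, 2):
        for c in line_cells(a, b):
            crossed_by[c].add((a, b))

    changed = (7, 7)  # placeholder modified cell
    edges_delta = crossed_by[changed]  # only these edges need recomputing
    print(len(edges_delta), "of", len(cells) * (len(cells) - 1) // 2, "edges affected")
    ```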

  • E-Business Suite - Cloning Basics & AMP Cloning - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - US
    PRODUCT FAMILY: EBS - ATG - Utilities
    July 20, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern

    This 1.5-hour session is recommended for technical and functional users who are interested in getting a generic overview of the cloning functionality available in the E-Business Suite release. We are going to talk about the generic cloning options and will then go into depth about the cloning scenario when using AMP (Applications Management Pack) within Enterprise Manager.

    Topics will include:

    - Cloning Overview
    - RapidClone Steps in Detail
    - RapidClone Limitations
    - EM Grid Setup with AMP for Cloning
    - Advantages of Cloning with AMP
    - Cloning Procedures Available with AMP
    - Monitoring the Clone Operation
    - A Few Things to Remember Before Cloning

    A short, live demonstration (only if applicable) and a question-and-answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

  • How to consolidate servers with the not-very-strong infrastructure

    - by Sim
    All,

    Situation:

    - We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems.
    - Each distributor has 1 HQ and 5-10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system).
    - Every day, the branches have to extract data and send it (via email/Skype) to HQ for data consolidation.
    - When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough; that's why we went with the decentralized model (each branch got its own server).
    - Now the infrastructure is mature, and we need to consolidate data more quickly (not from branches -> HQ -> our company, but something like HQ -> our company only).

    Goal:

    - Have Solomon servers only in the distributor HQs; all transactions in the branches (retrieved from the POS) are synchronized with the HQ server directly.
    - Have a backup plan in case the Internet goes down or the HQ server goes down.

    Question: Given the above, could you suggest a model for me? Should we use Terminal Services, or some other solution? Any watch-outs or suggestions? Any good articles to read about this? Thanks a lot.

  • E3 Booth Babes Display a Painful Lack of Video Game Knowledge [Video]

    - by Jason Fitzpatrick
    If you thought a prerequisite for manning a booth at an electronics expo was a passing knowledge of the electronics and games you were promoting, you were wrong. In the above video, Chloe Dykstra puts a set of "booth babes" from the E3 2011 conference to the test by asking them simple questions about video games both new and old. If you're a gaming fan and you can watch this video without laughing out loud, you've got an iron will (or you're shaking your head in disbelief that someone could work a gaming convention and not know the answers to these questions). We won't lie: we were shaking our heads when one model admitted she'd worked at GameStop for a year and still didn't know any of the answers. What questions would you put on the list? How about "Finish this sentence: 'Your Princess is in another...'", answered with "Dimension?"

    5HP: Booth Babe Edition – E3 2011 [YouTube via Kotaku]

  • Can a malicious hacker share Linux distributions which trust bad root certificates?

    - by iamrohitbanga
    Suppose a hacker launches a new Linux distro with Firefox packaged in it. A browser ships with the certificates of the root certification authorities of the PKI, and because Firefox is free software, anyone can package it with fake root certificates; such a certificate would name a certification authority that is not actually trusted. Could this be used to falsely authenticate some websites, and how? Many existing Linux distros are mirrored by third parties, who could easily package software containing certificates that enable such attacks. Is the above possible? Has such an attack taken place before?

  • Displaying device contacts with an indication that the contact is registered to the app

    - by Prasanna Aarthi
    We are developing a mobile app that needs to pick up device contacts, display them, and indicate whether each contact has already registered with this app. We have our DB on the server, and the app fetches data using web services. What would be the best approach to implement the above scenario, taking performance into consideration?

    Option 1: Every time the user opens the app, fetch the contacts and send the list of email addresses to the server; check them against the registered email IDs and return the list of registered users in the contact list. In this approach, whenever the user opens the particular page he has to wait a few seconds for the data to load, but the contacts will always be the latest from the device.

    Option 2: The first time the user opens the app, fetch the contacts, send the entire list to the server, and save it in the server DB; retrieve the list of registered users among the contacts and save it to a local DB. From then on, data is fetched from the local DB and displayed. When a new user registers with the app, check again against the records in the central DB and send the list of contacts who have newly registered; this list is added to the local DB, and the process continues. In this case, new contacts added by the user will not be updated in the app immediately, but retrieval and display would be quick.

    What would be the correct approach? If there is a better way of doing this, please let me know.
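
    A minimal sketch of option 1's round trip, assuming a hypothetical POST /contacts/check endpoint that takes a JSON list of e-mail addresses and returns the registered subset; the URL and field names are invented for illustration:

    ```python
    import json
    from urllib import request

    API_URL = "https://api.example.com/contacts/check"  # hypothetical endpoint

    def registered_subset(emails):
        """POST the device's e-mail list; the server answers with who is registered."""
        body = json.dumps({"emails": emails}).encode("utf-8")
        req = request.Request(
            API_URL, data=body, headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req, timeout=10) as resp:
            return set(json.load(resp)["registered"])

    device_contacts = ["alice@example.com", "bob@example.com"]  # placeholders
    registered = registered_subset(device_contacts)
    for email in device_contacts:
        print(email, "(registered)" if email in registered else "")
    ```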
