Search Results

Search found 10256 results on 411 pages for 'elastic map reduce'.


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    Backup Services: General availability of Windows Azure Backup Services
    Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    Virtual Machines: Delete attached disks, availability set warnings, SQL AlwaysOn configuration
    Active Directory: Securely manage hundreds of SaaS applications
    Enterprise Management: Use Active Directory to better manage Windows Azure
    Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

    Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but you).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it.

    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers, and also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.

    Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup, and also explains how to use Windows PowerShell cmdlets to do the same tasks.

    Below are some of the key benefits the Windows Azure Backup Service provides:

    Simple configuration and management.
    Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk or to the cloud.

    Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

    Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, so the data is never decrypted in the service. Users can also set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.

    Data integrity is verified in the cloud. In addition to the secure backups, the backed-up data is automatically checked for integrity once the backup is done. As a result, any corruption that may arise during data transfer can be easily identified and fixed automatically.

    Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime.

    Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as the names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the Datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:

    Ability to Delete Both VM Instances + Attached Disks in One Operation

    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not the drives attached to the VM. You had to delete these yourself manually from the storage account. With today's update we've added a convenience option that allows you to either retain or delete the attached disks when you delete the VM.

    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the cloud service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine in Them

    One of the nice features that Windows Azure Virtual Machines supports is the concept of "availability sets". An availability set allows you to define a tier/role (e.g. webfrontends, databaseservers, etc.) that you can map virtual machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

    One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn

    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable it simply by checking the "Direct Server Return" checkbox. You can learn more about how to use Direct Server Return for SQL Server AlwaysOn availability groups here.

    Active Directory: Application Access Enhancements

    This summer we released the initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) support against SaaS applications (including Office 365, Salesforce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB-based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

    Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
    In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

    SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

    Application access assignment and removal – IT admins can assign access privileges for web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

    User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd-party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

    Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

    Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works:

    Subscriptions within a Directory

    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you didn't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within.

    Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com-based Microsoft ID, which is now mapped to a "scottgu" Active Directory that was created for me. By default everything will continue to work just like it used to.

    Manage your Directory

    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting-started page that provides documentation and links to perform common tasks with it.

    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the Configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

    Note that users within a directory do not by default have admin rights to log in to or manage Windows Azure-based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the Administrators tab within it.
    Sign-In Integration within Visual Studio

    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. Just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription).

    You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory-based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and are available to start using. No downloading of management certificates required – all of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories

    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.

    If you want to map a subscription to a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering by Directory and Subscription

    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features, including:

    Visual Studio 2013 support
    Integrated Windows Azure sign-in support within Visual Studio
    Remote debugging of Cloud Services with Visual Studio
    Firewall management support within Visual Studio for SQL Databases
    Visual Studio 2013 RTM VM images for MSDN subscribers
    Windows Azure Management Libraries for .NET
    Updated Windows Azure PowerShell cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    AutoScale: Richer time- and date-based scheduling support (set different rules on different dates)
    AutoScale: Ability to scale to zero virtual machines (very useful for dev/test scenarios)
    AutoScale: Support for time-based scheduling of Mobile Services AutoScale rules
    Operation Logs: Auditing support for Service Bus management operations

    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Trouble with OpenLayers Styles.

    - by Jenny
    So, tired of always seeing the bright orange default regular polygons, I'm trying to learn to style OpenLayers. I've had some success with:

        var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
        layer_style.fillColor = "#000000";
        layer_style.strokeColor = "#000000";
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = layer_style;

    But since I am drawing my polygons with DrawFeature, my style only takes effect once I've finished drawing, and seeing it snap from bright orange to grey is sort of disconcerting. So, I learned about temporary styles, and tried:

        var layer_style = new OpenLayers.Style({"default": {fillColor: "#000000"}, "temporary": {fillColor: "#000000"}});
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = layer_style;

    This got me a still-orange square – until I stopped drawing, when it snapped into completely opaque black. I figured maybe I had to explicitly set the fillOpacity... no dice. Even when I changed both fill colors to be pink and blue, respectively, I still saw only orange and opaque black.

    I've tried messing with StyleMaps, since I read that if you only add one style to a style map, it uses the default one for everything, including the temporary style:

        var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
        var style_map = new OpenLayers.StyleMap(layer_style);
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = style_map;

    That got me the black opaque square, too (even though that layer style works when not given to a map). Passing the map to the layer itself like so:

        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", style_map);

    didn't get me anything at all: orange all the way, even after drawing. This, though:

        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", {styleMap: style_map});

    is a lot more successful: orange while drawing, translucent black with a black outline when drawn. Just like when I didn't use a map. Problem is, still no temporary... So, I tried initializing my map this way:

        var style_map = new OpenLayers.StyleMap({"default": layer_style, "temporary": layer_style});

    No opaque square, but no dice for the temporary, either... still orange snapping to black transparent. Even if I make a new Style (layer_style2) and set temporary to that, no luck. And no luck with setting the "select" style, either.

    What am I doing wrong? Temporary IS for styling things that are currently being sketched, correct? Is there some other way specific to the DrawFeature control?

    Edit: setting extendDefault to be true doesn't seem to help, either:

        var style_map = new OpenLayers.StyleMap({"default": layer_style, "temporary": layer_style}, {"extendDefault": "true"});
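
    For reference, here is a minimal sketch of the combination that usually has to come together in OpenLayers 2 for a sketch style to apply: separate Style objects registered under the "default" and "temporary" render intents of a StyleMap, with the StyleMap passed in the layer options rather than assigned to layer.style. It is written as TypeScript against the untyped OpenLayers global, and the colours are illustrative:

        // OpenLayers 2.x ships no TypeScript typings, so treat the global as untyped.
        declare const OpenLayers: any;

        const styleMap = new OpenLayers.StyleMap({
            // used for features once drawing is finished
            "default": new OpenLayers.Style({
                fillColor: "#000000",
                fillOpacity: 0.4,
                strokeColor: "#000000"
            }),
            // used by the DrawFeature handler for the in-progress sketch
            "temporary": new OpenLayers.Style({
                fillColor: "#000000",
                fillOpacity: 0.4,
                strokeColor: "#000000"
            })
        });

        // The StyleMap goes in the layer options (second argument);
        // assigning to layer.style bypasses the render-intent machinery entirely.
        const polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", {
            styleMap: styleMap
        });

        const drawControl = new OpenLayers.Control.DrawFeature(
            polygonLayer,
            OpenLayers.Handler.Polygon
        );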


  • Android - Custom Icons in ListView

    - by Ryan
    Is there any way to place a custom icon for each group item? For phone I'd like to place a phone, for housing I'd like to place a house. Here is my code, but it keeps throwing a warning and locks up on me.

        ListView myList = (ListView) findViewById(R.id.myList);
        //ExpandableListAdapter adapter = new MyExpandableListAdapter(data);
        List<Map<String, Object>> groupData = new ArrayList<Map<String, Object>>();
        Iterator it = data.entrySet().iterator();
        while (it.hasNext()) {
            // Get the key name and value for it
            Map.Entry pair = (Map.Entry) it.next();
            String keyName = (String) pair.getKey();
            String value = pair.getValue().toString();
            // Add the parents -- aka main categories
            Map<String, Object> curGroupMap = new HashMap<String, Object>();
            groupData.add(curGroupMap);
            if (value == "Phone")
                curGroupMap.put("ICON", findViewById(R.drawable.phone));
            else if (value == "Housing")
                curGroupMap.put("NAME", keyName);
            curGroupMap.put("VALUE", value);
        }
        // Set up our adapter
        mAdapter = new SimpleAdapter(
            mContext,
            groupData,
            R.layout.exp_list_parent,
            new String[] { "ICON", "NAME", "VALUE" },
            new int[] { R.id.iconImg, R.id.rowText1, R.id.rowText2 }
        );
        myList.setAdapter(mAdapter);

    The error I'm getting:

        05-28 17:36:21.738: WARN/System.err(494): java.io.IOException: Is a directory
        05-28 17:36:21.809: WARN/System.err(494): at org.apache.harmony.luni.platform.OSFileSystem.readImpl(Native Method)
        05-28 17:36:21.838: WARN/System.err(494): at org.apache.harmony.luni.platform.OSFileSystem.read(OSFileSystem.java:158)
        05-28 17:36:21.851: WARN/System.err(494): at java.io.FileInputStream.read(FileInputStream.java:319)
        05-28 17:36:21.879: WARN/System.err(494): at java.io.BufferedInputStream.fillbuf(BufferedInputStream.java:183)
        05-28 17:36:21.908: WARN/System.err(494): at java.io.BufferedInputStream.read(BufferedInputStream.java:346)
        05-28 17:36:21.918: WARN/System.err(494): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
        05-28 17:36:21.937: WARN/System.err(494): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:459)
        05-28 17:36:21.948: WARN/System.err(494): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:271)
        05-28 17:36:21.958: WARN/System.err(494): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:296)
        05-28 17:36:21.978: WARN/System.err(494): at android.graphics.drawable.Drawable.createFromPath(Drawable.java:801)
        05-28 17:36:21.988: WARN/System.err(494): at android.widget.ImageView.resolveUri(ImageView.java:501)
        05-28 17:36:21.998: WARN/System.err(494): at android.widget.ImageView.setImageURI(ImageView.java:289)

    Thanks in advance for your help!!


  • Scipy Negative Distance? What?

    - by disappearedng
    I have an input file of floating point numbers to 4 decimal places, i.e. 13359 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0007 ... (the first is the id). My class uses the loadVectorsFromFile method, which multiplies each number by 10000 and then int()s it. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I am continually seeing the error "Linkage Z contains negative values".

    I seriously think this is a bug, because:

    I checked my values, and they are nowhere near small enough or big enough to approach the limits of floating point numbers, and

    the formula that I used to derive the values in the file uses absolute value (my input is DEFINITELY right).

    Can someone enlighten me as to why I am seeing this weird error? What is going on that is causing this negative distance error?

    =====

        def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True):
            """Inflate to prevent "negative" distance; we use 4 decimal points, so *10000"""
            vectors = {}
            self.winfo("Each vector is set to have %d limit in length" % limit)
            with open(loc) as inf:
                for line in filter(None, inf.read().split('\n')):
                    l = line.split('\t')
                    if limit:
                        scores = map(float, l[1:limit+1])
                    else:
                        scores = map(float, l[1:])
                    if inflate:
                        vectors[l[0]] = map(lambda x: int(x*10000), scores)  # int might save space
                    else:
                        vectors[l[0]] = scores
            if assertAllPositive:
                # Assert that it has no negative value
                for dirID, l in vectors.iteritems():
                    if reduce(operator.or_, map(lambda x: x < 0, l)):
                        self.werror("Vector %s has negative values!" % dirID)
            return vectors

        def main(self, inputDir, outputDir, limit=0, inFname="data.vectors.all", mappingFname='all.id.features.group.intermediate'):
            """
            Loads vectors from a file and starts clustering
            INPUT
                vectors is { featureID: tfidfVector (list), }
            """
            IDFeatureDic = loadIdFeatureGroupDicFromIntermediate(pjoin(self.configDir, mappingFname))
            if not os.path.exists(outputDir):
                os.makedirs(outputDir)
            vectors = self.loadVectorsFromFile(limit, pjoin(inputDir, inFname))
            for threshold in map(lambda x: float(x)/30, range(20, 30)):
                clusters = self._hclustering(threshold, vectors)
                if clusters:
                    outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold))
                    with open(outputLoc, 'w') as outf:
                        for clusterNo, cluster in clusters.iteritems():
                            outf.write('%s\n' % str(clusterNo))
                            for featureID in cluster:
                                feature, group = IDFeatureDic[featureID]
                                outline = "%s\t%s\n" % (feature, group)
                                outf.write(outline.encode('utf-8'))
                            outf.write("\n")
                else:
                    continue

        def _hclustering(self, threshold, vectors):
            """Function which you should call to vary the threshold
            vectors: { featureID: [ tfidf score, tfidf score, ... ] }
            """
            clusters = defaultdict(list)
            if len(vectors) > 1:
                try:
                    results = hierarchy.fclusterdata(vectors.values(), threshold, metric='cosine')
                except ValueError, e:
                    self.werror("_hclustering: %s" % str(e))
                    return False
                for i, featureID in enumerate(vectors.keys()):


  • Mapstraction: Changing an Icon's image URL after it has been added?

    - by Paul Owens
    I am trying to use marker.setIcon() to change a marker's image. However, it appears that although this changes the marker.iconUrl attribute, the icon itself is using marker.proprietary_marker.$.icon.image to display the marker's image – so the marker's icon remains unchanged. Is there a way to dynamically change marker.proprietary_marker.$.icon.image?

    To reproduce: add a marker. Check the icon's image URL and the proprietary icon's image – they're the same. Change the icon. Again check the URLs. Now the icon URL has changed, but the marker still shows the old image, which is in the proprietary marker object.

        <head>
            <title>Map Test</title>
            <script src="http://maps.google.com/maps?file=api&v=2&key=Your-Google-API-Key" type="text/javascript"></script>
            <script src="mapstraction.js"></script>
            <script type="text/javascript">
                var map;
                var marker;

                function getMap() {
                    map = new mxn.Mapstraction('myMap', 'google');
                    map.setCenterAndZoom(new mxn.LatLonPoint(45.559242, -122.636467), 15);
                }

                function addMarker() {
                    marker = new mxn.Marker(new mxn.LatLonPoint(45.559242, -122.636467));
                    marker.addData({infoBubble: "Text", label: "Label", marker: 4, icon: "http://mapscripting.com/examples/mashups/richter-high.png"});
                    map.addMarker(marker);
                }

                function changeIcon() {
                    marker.setIcon("http://assets1.mapufacture.com/images/markers/usgs_marker.png");
                }

                function showIconURL() {
                    alert(marker.iconUrl);
                }

                function showProprietaryIconURL() {
                    alert(marker.proprietary_marker.$.icon.image);
                }
            </script>
        </head>
        <body onload="getMap()">
            <div id="myMap" style="width:627px; height:412px;"></div>
            <div>
                <input type="button" value="add marker" OnClick="addMarker();">
                <input type="button" value="change icon" OnClick="changeIcon();">
                <input type="button" value="show icon URL" OnClick="showIconURL();">
                <input type="button" value="show proprietary icon URL" OnClick="showProprietaryIconURL();">
            </div>
        </body>
        </html>
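
    One workaround worth trying, with hedged assumptions: since Mapstraction's setIcon() only updates the abstraction's iconUrl attribute, the new URL can also be pushed down to the wrapped marker. Assuming the Google Maps v2 provider, the underlying GMarker exposes setImage(url); if the wrapped object is nested differently (as the marker.proprietary_marker.$ path above suggests), the property access would need adjusting. Sketched as TypeScript against the untyped globals:

        declare const mxn: any; // Mapstraction global, no typings available

        // Hypothetical helper, not part of Mapstraction's API:
        function setIconDeep(marker: any, url: string): void {
            marker.setIcon(url); // updates marker.iconUrl only
            const gMarker = marker.proprietary_marker; // assumed to be a Google Maps v2 GMarker
            if (gMarker && typeof gMarker.setImage === "function") {
                gMarker.setImage(url); // repaints the icon that is actually rendered
            }
        }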


  • Android - MapView contained within a ListView

    - by Ryan
    Hello! Currently I am trying to place a MapView within a ListView. Has anyone had any success with this? Is it even possible? Here is my code:

        ListView myList = (ListView) findViewById(android.R.id.list);
        List<Map<String, Object>> groupData = new ArrayList<Map<String, Object>>();

        Map<String, Object> curGroupMap = new HashMap<String, Object>();
        groupData.add(curGroupMap);
        curGroupMap.put("ICON", R.drawable.back_icon);
        curGroupMap.put("NAME", "Go Back");
        curGroupMap.put("VALUE", "By clicking here");

        Iterator it = data.entrySet().iterator();
        while (it.hasNext()) {
            // Get the key name and value for it
            Map.Entry pair = (Map.Entry) it.next();
            String keyName = (String) pair.getKey();
            String value = pair.getValue().toString();
            if (value != null) {
                // Add the parents -- aka main categories
                curGroupMap = new HashMap<String, Object>();
                groupData.add(curGroupMap);
                // Push the correct icon
                if (keyName.equalsIgnoreCase("Phone"))
                    curGroupMap.put("ICON", R.drawable.phone_icon);
                else if (keyName.equalsIgnoreCase("Housing"))
                    curGroupMap.put("ICON", R.drawable.house_icon);
                else if (keyName.equalsIgnoreCase("Website"))
                    curGroupMap.put("ICON", R.drawable.web_icon);
                else if (keyName.equalsIgnoreCase("Area Snapshot"))
                    curGroupMap.put("ICON", R.drawable.camera_icon);
                else if (keyName.equalsIgnoreCase("Overview"))
                    curGroupMap.put("ICON", R.drawable.overview_icon);
                else if (keyName.equalsIgnoreCase("Location"))
                    curGroupMap.put("ICON", R.drawable.map_icon);
                else
                    curGroupMap.put("ICON", R.drawable.icon);
                // Pop on the name and value
                curGroupMap.put("NAME", keyName);
                curGroupMap.put("VALUE", value);
            }
        }

        curGroupMap = new HashMap<String, Object>();
        groupData.add(curGroupMap);
        curGroupMap.put("ICON", R.drawable.back_icon);
        curGroupMap.put("NAME", "Go Back");
        curGroupMap.put("VALUE", "By clicking here");

        // Set up adapter
        mAdapter = new SimpleAdapter(
            mContext,
            groupData,
            R.layout.exp_list_parent,
            new String[] { "ICON", "NAME", "VALUE" },
            new int[] { R.id.photoAlbumImg, R.id.rowText1, R.id.rowText2 }
        );
        myList.setAdapter(mAdapter); // Bind the adapter to the list

    Thanks in advance for your help!!


  • Plot markers on Google Maps with JSON and jQuery

    - by mark
    I am trying to plot the markers as defined in a JSON file on Google Maps, but they don't show on the map. Can somebody help me with this problem?

    This is the JSON file: http://sionvalais.com/gmap/markers/

    This is the JavaScript function:

        function loadMarkers() {
            var bounds = map.getBounds();
            var zoomLevel = map.getZoom();
            $.post("/gmaps/markers/index.php",
                {zoom: zoomLevel,
                 swLat: bounds.getSouthWest().lat(), swLon: bounds.getSouthWest().lng(),
                 neLat: bounds.getNorthEast().lat(), neLon: bounds.getNorthEast().lng()},
                function(data) {
                    processMarkers(data, _smallMarkerSize);
                },
                "json"
            );
        }

        function processMarkers(webcams, markerSize) {
            var marker = null;
            var markersInView = new Array();
            var idsInView = new Array();
            // Loop through the new webcams
            for (var i = 0; i < webcams.length; i++) {
                var idx = markers.indexOf(webcams[i].id);
                if (idx == -1) {
                    var info_html = "<table class='infowindow'>";
                    info_html += "<tr><td class='img'>";
                    info_html += "<img src='" + webcams[i].smallimg + "' /><td>";
                    info_html += "<td><p><b>" + webcams[i].loc + "</b>";
                    info_html += "<br /><a href='/webcam/" + webcams[i].url + "' target='_blank'>Show webcam</a></p></td></tr>";
                    info_html += "</table>";
                    marker = new WebcamMarker(new GLatLng(webcams[i].latitude, webcams[i].longitude), {image: "" + webcams[i].smallimg + "", height: markerSize, width: markerSize});
                    marker.myhtml = info_html;
                    map.addOverlay(marker);
                    markersInView[webcams[i].id] = marker;
                } else {
                    markersInView[webcams[i].id] = markers[webcams[i].id];
                }
                idsInView.push(webcams[i].id);
            }
            // Now remove the markers outside of the viewport
            for (var i = 0; i < webcamids.length; i++) {
                var idx = markersInView.indexOf(webcamids[i]);
                if (idx == -1) {
                    marker = markers[webcamids[i]];
                    map.removeOverlay(marker);
                }
            }
            markers = markersInView;
            webcamids = idsInView;
        }
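
    One thing that stands out in the code above: markersInView and markers are Arrays used as id-keyed dictionaries, but indexOf searches an array's values, not its keys, so those lookups always return -1. A hedged sketch of the same viewport sync using plain objects keyed by webcam id (TypeScript; makeMarker stands in for the WebcamMarker construction above and is not a real function from the post):

        declare const map: any; // the GMap2 instance from the page
        declare function makeMarker(cam: Webcam): any; // builds the WebcamMarker as shown above

        interface Webcam { id: string; latitude: number; longitude: number; }

        let markers: { [id: string]: any } = {};

        function processMarkers(webcams: Webcam[]): void {
            const inView: { [id: string]: any } = {};
            for (const cam of webcams) {
                let marker = markers[cam.id];
                if (!marker) {
                    marker = makeMarker(cam); // only create markers we have not seen
                    map.addOverlay(marker);
                }
                inView[cam.id] = marker;
            }
            // anything left over is now outside the viewport
            for (const id in markers) {
                if (!(id in inView)) map.removeOverlay(markers[id]);
            }
            markers = inView;
        }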


  • Destroyed user account on OS X with dscl; how to restore?

    - by Sam Ritchie
    I was trying to create a new user on my OS X Lion machine, and somehow managed to destroy my own user's account. Here are the steps I took; hopefully someone here can recognize what I did, and maybe identify some way around this. First, I ran these commands: sudo dscl localhost -create /Local/Default/Users/elasticsearch sudo dscl localhost -create /Local/Default/Users/elasticsearch /bin/bash # mistake! sudo dscl localhost -create /Local/Default/Users/elasticsearch UserShell /bin/bash sudo dscl localhost -create /Local/Default/Users/elasticsearch RealName "Elastic Search" sudo dscl localhost -create /Local/Default/Users/elasticsearch UniqueID 503 # MY uniqueID sudo dscl localhost -create /Local/Default/Users/elasticsearch PrimaryGroupID 1000 sudo dscl localhost -create /Local/Default/Users/elasticsearch NFSHomeDirectory /Local/Users/elasticsearch The big mistake I made here was using "503", which was my user's UniqueID. Immediately my shell username changed to "elasticsearch". I fiddled around, tried to change the current user with sudo su -u sritchie, but this didn't work. On restart, only the "Elastic Search" user was available. I logged into the Lion Recovery partition and reset the root password. After logging in as root and checking on the terminal, I made the remarkable discovery that my home folder was totally empty. I deleted the elasticsearch user, but it made no difference. I don't see anything in Deleted Users either. The odd thing is that when I log in now as myself (sritchie) I can see desktop icons with previews. I can even open a few text files from the Downloads folder if I use the dock alias to Downloads. Could this data be hiding somewhere? Any help would be REALLY appreciated! Thanks, Sam


  • Combinations and Permutations in F#

    - by Noldorin
    I've recently written the following combinations and permutations functions for an F# project, but I'm quite aware they're far from optimised.

        /// Rotates a list by one place forward.
        let rotate lst = List.tail lst @ [List.head lst]

        /// Gets all rotations of a list.
        let getRotations lst =
            let rec getAll lst i =
                if i = 0 then [] else lst :: (getAll (rotate lst) (i - 1))
            getAll lst (List.length lst)

        /// Gets all permutations (without repetition) of specified length from a list.
        let rec getPerms n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, _ ->
                lst
                |> getRotations
                |> Seq.collect (fun r -> Seq.map ((@) [List.head r]) (getPerms (k - 1) (List.tail r)))

        /// Gets all permutations (with repetition) of specified length from a list.
        let rec getPermsWithRep n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, _ -> lst |> Seq.collect (fun x -> Seq.map ((@) [x]) (getPermsWithRep (k - 1) lst))
            // equivalent:
            // | k, _ -> lst |> getRotations |> Seq.collect (fun r -> List.map ((@) [List.head r]) (getPermsWithRep (k - 1) r))

        /// Gets all combinations (without repetition) of specified length from a list.
        let rec getCombs n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombs (k - 1) xs)) (getCombs k xs)

        /// Gets all combinations (with repetition) of specified length from a list.
        let rec getCombsWithRep n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombsWithRep (k - 1) lst)) (getCombsWithRep k xs)

    Does anyone have any suggestions for how these functions (algorithms) can be sped up? I'm particularly interested in how the permutation ones (with and without repetition) can be improved. The business involving rotations of lists doesn't look too efficient to me in retrospect.

    Update: Here's my new implementation for the getPerms function, inspired by Tomas's answer. Unfortunately, it's not really any faster than the existing one. Suggestions?

        let getPerms n lst =
            let rec getPermsImpl acc n lst = seq {
                match n, lst with
                | k, x :: xs ->
                    if k > 0 then
                        for r in getRotations lst do
                            yield! getPermsImpl (List.head r :: acc) (k - 1) (List.tail r)
                    if k >= 0 then yield! getPermsImpl acc k []
                | 0, [] -> yield acc
                | _, [] -> () }
            getPermsImpl List.empty n lst


  • Dynamic mass hosting using mod_wsgi

    - by Virgil Balibanu
    Hi, I am trying to configure an Apache server using mod_wsgi for dynamic mass hosting. Each user will have its own instance of a Python application located in /mnt/data/www/domains/[user_name], and there will be a vhost.map telling me which domain maps to each user's directory (the directory will have the same name as the user). What I do not know is how to write the WSGIScriptAliasMatch line so that it also takes the path from the vhost.map file.

    What I want to do is something like this: I can have different domains on my server, like www.virgilbalibanu.com, or virgil.balibanu.com and flaviu.balibanu.com, where each domain would belong to a different user, the user name having no necessary connection to the domain name. I want to do this because a user, when he makes an account, receives something like virgil.mydomain.com, but if he has his own domain he can change it later to that, for example www.virgilbalibanu.ro, and this way I would only need to change the line in the vhost.map file.

    So far I have something like this:

        Alias /media/ /mnt/data/www/iitcms/media/   # all media is taken from here

        RewriteEngine on
        RewriteMap lowercase int:tolower
        # define the map file
        RewriteMap vhost txt:/mnt/data/www/domains/vhost.map
        # this does not work either, can't say why atm
        RewriteCond %{REQUEST_URI} ^/uploads/
        RewriteCond ${lowercase:%{SERVER_NAME}} ^(.+)$
        RewriteCond ${vhost:%1} ^(/.*)$
        RewriteRule ^/(.*)$ %1/media/uploads/$1

        # ---> this I have no idea how I could do
        WSGIScriptAliasMatch ^([^/]+) /mnt/data/www/domains/$1/apache/django.wsgi

        <Directory "/mnt/data/www/domains">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/apache>
            AllowOverride None
            Options FollowSymLinks ExecCGI
            Order deny,allow
            Allow from all
        </DirectoryMatch>

        <Directory /mnt/data/www/iitcms/media>
            AllowOverride None
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

        <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/media/uploads>
            AllowOverride None
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    I know the part I did with mod_rewrite doesn't work (I couldn't really say why), but that's not as important so far; I am curious how I could write the WSGIScriptAliasMatch line to accomplish my objective. I would be very grateful for any help, or any other ideas related to how I can deal with this. It would also be great if I managed to get each site to run in WSGI daemon mode, though that is not as important. Thanks, Virgil


  • JavaScript and the Google Maps API

    - by Tiny Giant Studios
    Hiya coding ninjas! I'm in a spot of bother and my hairline is on the chopping block. When I integrated the Maps API on this site, ritaknoetze.com, everything worked perfectly. However, after copying that exact code to a different demo website, scarabpaper, the map doesn't show up at all. Could someone show me the ropes on what I'm doing wrong? Here's the code I got from Google itself, which I modified for my WordPress theme/installation.

    JavaScript:

        <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript">
            function initialize() {
                var myLatlng = new google.maps.LatLng(-34.009839, 22.78101);
                var myOptions = {
                    zoom: 9,
                    center: myLatlng,
                    navigationControl: true,
                    mapTypeControl: false,
                    scaleControl: false,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                }
                var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
                var image = '<?php bloginfo('template_url')?>/assets/googlemaps_marker.png';
                var myLatLng = new google.maps.LatLng(-34.009839, 22.78101);
                var beachMarker = new google.maps.Marker({
                    position: myLatLng,
                    map: map,
                    icon: image
                });
            }
        </script>

    My HTML where the JavaScript goes:

        <div class="contact_container">
            <div id="map_canvas"></div>
            <div class="clearfloat"></div>
        </div>

    My CSS for the affected divs:

        #map_canvas {
            width: 880px;
            height: 300px;
            margin-left: 10px;
            margin-bottom: 30px;
            margin-top: 10px;
            float: left;
            border: 1px solid #dedcdc;
        }

        .contact_container { /* container for ALL the contact info */
            background-color: #fff;
            border: 1px solid #dedcdc;
            width: 900px;
            margin-top: 30px;
            padding: 20px;
            padding-bottom: 0;
        }

    Any help would be greatly appreciated...
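
    One detail worth checking before digging deeper: nothing in the markup posted above ever calls initialize() – there is no onload hook – so on a page where the theme does not trigger it elsewhere, the canvas simply stays empty. A minimal sketch of the usual wiring from Google's v3 samples (TypeScript against the untyped Maps global):

        declare const google: any;           // Maps JavaScript API v3 global
        declare function initialize(): void; // the function defined in the page above

        // Run initialize() once the page has loaded; without some call like
        // this, the map_canvas div is never populated.
        google.maps.event.addDomListener(window, "load", initialize);

    It is also worth confirming that #map_canvas has a non-zero size at the moment the map is created, since a map built inside a hidden or zero-height container renders as a blank box.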


  • Which AMI to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested serverfault.com might be a better place to ask this question. So here goes:

    I'm trying to determine which Amazon Machine Image (AMI) to use as my virtual server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5.

    I'm aware that I can choose the Basic 32-bit Amazon Linux AMI and then manually install Tomcat and MySQL (does MySQL get installed on the image, or separately on an Elastic Block Store (EBS)?). Here's the rub: I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux, but I'm not familiar with the install process for Tomcat and MySQL on Linux, or with commands like sudo and chmod. I'm happy to get more hands-on with Linux, but I'm short on time right now.

    Are there AMIs that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 community AMIs that are Free Tier eligible; 51 of them have "Tomcat" in their name.

    I'm willing to consider using Elastic Beanstalk, but my research thus far hasn't found any discussion of using MySQL with Beanstalk. The discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.


  • Help with C# program design implementation: a multidimensional array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile, each with an optional offset. I'd like this so that I could, for example, add individual trees on the world map to give it more flair, or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp.

    I went to implement the idea and had something like this in my Map object:

        List<Tile>[,] Grid;

    But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind there has to be a better way. Now this IS premature optimization – I don't know if the way I happen to design my maps and game will even be able to handle this – but it seems needlessly inefficient, and something that could creep up on me if my game gets more complex.

    One idea I have is to make the offset and the multiple tiles optional, so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multidimensional array of objects?

        object[,] Grid;

    So here are my criteria (a sketch of one possible layout follows the code below):

    A 2D grid of tile locations
    Each tile location has a minimum of 1 tile, but can optionally have more
    Each extra tile can optionally have an x and y offset for pinpoint placement

    Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background, here's roughly what my Map and Tile objects amount to:

        public struct Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources; // Source rectangles for where in Texture to get the sprite
            public List<Tile>[,] Grid;
        }

        public struct Tile
        {
            public int Index; // Where in Sources to find the source Rectangle
            public int X, Y;  // Optional offsets
        }
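
    One common shape that satisfies those three criteria, sketched in TypeScript for brevity (the same structure maps directly onto a C# Tile[,] plus a Dictionary<Point, List<Tile>>; all names here are illustrative, not from the post): keep the mandatory tile in a dense 2D array, and move the optional extras into a sparse map that only holds entries for cells that actually have decorations, so the 40,000 empty lists never exist.

        interface Tile {
            index: number;  // where in Sources to find the source rectangle
            x?: number;     // optional offsets, present only on decoration tiles
            y?: number;
        }

        class TileGrid {
            base: Tile[][];                             // exactly one tile per cell
            private extras = new Map<string, Tile[]>(); // sparse: only decorated cells

            constructor(width: number, height: number) {
                this.base = Array.from({ length: height }, () =>
                    Array.from({ length: width }, () => ({ index: 0 }))
                );
            }

            addExtra(cx: number, cy: number, tile: Tile): void {
                const key = `${cx},${cy}`;
                const list = this.extras.get(key);
                if (list) list.push(tile);
                else this.extras.set(key, [tile]);
            }

            // Everything to draw at one cell, base tile first.
            tilesAt(cx: number, cy: number): Tile[] {
                const extra = this.extras.get(`${cx},${cy}`);
                return extra ? [this.base[cy][cx], ...extra] : [this.base[cy][cx]];
            }
        }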


  • Get two latlngs and draw a polyline between them

    - by anup sharma
    I have an issue in my code. First I want to get the latlng of a given address/city name from each of two text boxes, convert those into a "from" position latlng and a "to" position latlng, and finally draw a polyline between these points, with markers on both points as well. For now I am just trying to draw a line between the two points, but it still isn't working, and there are no errors in the console either. The code is here for your help:

        function getRoute() {
            var from_text = document.getElementById("travelFrom").value;
            var to_text = document.getElementById("travelTo").value;
            if (from_text == "") {
                alert("Enter travel from field");
                document.getElementById("travelFrom").focus();
            }
            else if (to_text == "") {
                alert("Enter travel to field");
                document.getElementById("travelTo").focus();
            }
            else {
                //google.maps.event.addListener(map, "", function (e) {
                var myLatLng = new google.maps.LatLng(28.6667, 77.2167);
                var mapOptions = {
                    zoom: 3,
                    center: myLatLng,
                    mapTypeId: google.maps.MapTypeId.TERRAIN
                };
                var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);
                var geocoder = new google.maps.Geocoder();
                var address1 = from_text;
                var address2 = to_text;
                var from_latlng, to_latlng;
                //var prepath = path;
                //if(prepath){
                //    prepath.setMap(null);
                //}

                geocoder.geocode({ 'address': address1 }, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        // do something with the geocoded result
                        // alert(results[0].geometry.location);
                        from_latlng = results[0].geometry.location;
                        // from_lat = results[0].geometry.location.latitude;
                        // from_lng = results[0].geometry.location.longitude;
                        // alert(from_latlng);
                    }
                });

                geocoder.geocode({ 'address': address2 }, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        // do something with the geocoded result
                        to_latlng = results[0].geometry.location;
                        // to_lat = results[0].geometry.location.latitude;
                        // to_lng = results[0].geometry.location.longitude;
                        // alert(to_latlng)
                    }
                });

                setTimeout(function() {
                    var flightPlanCoordinates = [
                        new google.maps.LatLng(from_latlng),
                        new google.maps.LatLng(to_latlng)
                    ];
                    //alert("123")
                    var polyline;
                    polyline = new google.maps.Polyline({
                        path: flightPlanCoordinates,
                        strokeColor: "#FF0000",
                        strokeOpacity: 1.0,
                        strokeWeight: 2
                    });
                    polyline.setMap(map);
                    // assign to global var path
                    // path = polyline;
                }, 4000);
                // });
            }
        }
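
    For comparison, here is one hedged way to restructure the asynchronous part: draw only after both geocoder callbacks have fired (instead of guessing with setTimeout), and put the geocoder's LatLng results into the path directly – wrapping an existing LatLng in new google.maps.LatLng(...) produces an invalid point. Written as TypeScript against the untyped Maps v3 global; drawRoute is an illustrative name:

        declare const google: any; // Maps JavaScript API v3 global

        function drawRoute(map: any, address1: string, address2: string): void {
            const geocoder = new google.maps.Geocoder();
            const points: any[] = [];

            const geocodeOne = (address: string) => {
                geocoder.geocode({ address: address }, (results: any, status: string) => {
                    if (status !== google.maps.GeocoderStatus.OK) return;
                    points.push(results[0].geometry.location); // already a LatLng
                    if (points.length === 2) {
                        // both callbacks are done; now it is safe to draw
                        new google.maps.Polyline({
                            path: points,
                            strokeColor: "#FF0000",
                            strokeOpacity: 1.0,
                            strokeWeight: 2,
                            map: map
                        });
                    }
                });
            };

            geocodeOne(address1);
            geocodeOne(address2);
        }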


  • ADO Exception in HQL query

    - by Yoav
    I have 2 classes: Project and DataStructure. Class Project contains a collection of DataStructure. My goal is to load a Project and all its DataStructures in one call.

        public class Project
        {
            public virtual string Id { get { } set { } }
            public virtual string Name { get { } set { } }
            public virtual ISet<DataStructure> DataStructures { get { } set { } }
        }

        public class DataStructure
        {
            public virtual string Id { get { } set { } }
            public virtual string Name { get { } set { } }
            public virtual string Description { get { } set { } }
            public virtual Project Project { get { } set { } }
            public virtual IList<DataField> Fields { get { } set { } }
        }

    Note that DataStructure also contains a list of class DataField, but I don't want to load those right now. The mapping in Fluent NHibernate:

        public class ProjectMap : ClassMap<Project>
        {
            public ProjectMap()
            {
                Table("PROJECTS");
                Id(x => x.Pk, "PK");
                Map(x => x.Id, "ID");
                Map(x => x.Name, "NAME");
                HasMany<DataStructure>(x => x.DataStructures).KeyColumn("FK_PROJECT");
            }
        }

        public class DataStructureMap : ClassMap<DataStructure>
        {
            public DataStructureMap()
            {
                Table("DATA_STRUCTURES");
                Map(x => x.Id, "ID");
                Map(x => x.Name, "NAME");
                Map(x => x.Description, "DESCRIPTION");
                References<Project>(x => x.Project, "FK_PROJECT");
                HasMany<DataField>(x => x.Fields).KeyColumn("FK_DATA_STRUCTURE");
            }
        }

    This is my query:

        using (ISession session = SessionFactory.OpenSession())
        {
            IQuery query = session.CreateQuery("from Project pr left join pr.DataStructure");
            project = query.List<Project>();
        }

    query.List() returns this exception:

        NHibernate.Exceptions.GenericADOException: Could not execute query[SQL: SQL not available]
        ---> System.ArgumentException: The value "System.Object[]" is not of type "Project" and cannot be used in this generic collection.


  • JComboBox is not showing in the JDialog

    - by Pujan Srivastava
    I have 2 classes. When I put the following 3 lines in the method addCourses(), the dialog does not show the combo boxes in the panel; but when I remove them from addCourses() and put them in the constructor, the JComboBoxes are shown in the panel. The data will not show in that case, though, because the items are added to the combo boxes after the constructor has run. How can I solve this problem?

        this.mainPanel.add(courseCombo, BorderLayout.NORTH);
        this.mainPanel.add(sessionCombo, BorderLayout.CENTER);
        this.mainPanel.add(courseButton, BorderLayout.SOUTH);

    The first class:

        public class Updator {
            CourseListFrame clf = new CourseListFrame();
            for (...) {
                clf.addContentsToBox(displayName, className);
            }
            clf.addCourses();
        }

    and the second class is:

        public class CourseListFrame extends JDialog implements ActionListener {
            public JPanel mainPanel = new JPanel(new BorderLayout(2, 2));
            public JButton courseButton = new JButton(("Submit"));
            public JComboBox courseCombo;
            public JComboBox sessionCombo;
            public Multimap<String, String> map; // = HashMultimap.create();
            public static CourseListFrame courseListDialog;

            public CourseListFrame() {
                super(this.getMainFrame());
                this.getContentPane().add(mainPanel);
                map = HashMultimap.create();
                courseCombo = new JComboBox();
                courseCombo.addItem("Select Courses");
                courseCombo.addActionListener(this);
                sessionCombo = new JComboBox();
            }

            public void addContentsToBox(String course, String session) {
                map.put(course, session);
                courseCombo.addItem(course);
            }

            public void actionPerformed(ActionEvent e) {
                JComboBox cb = (JComboBox) e.getSource();
                String str = (String) cb.getSelectedItem();
                setSessionCombo(str);
            }

            public void setSessionCombo(String course) {
                if (map.containsKey(course)) {
                    sessionCombo.removeAllItems();
                    Iterator it = map.get(course).iterator();
                    while (it.hasNext()) {
                        sessionCombo.addItem(it.next());
                    }
                }
            }

            public void addCourses() {
                this.mainPanel.add(courseCombo, BorderLayout.NORTH);
                this.mainPanel.add(sessionCombo, BorderLayout.CENTER);
                this.mainPanel.add(courseButton, BorderLayout.SOUTH);
            }

            public static void showCourseListDialog() {
                if (courseListDialog == null) {
                    courseListDialog = new CourseListFrame();
                }
                courseListDialog.pack();
                courseListDialog.setVisible(true);
                courseListDialog.setSize(260, 180);
            }
        }


  • How can I prevent a DDoS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal.

    Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

    Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?) - a sketch of the same idea at the application layer follows below

    Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand

    If using MySQL, set up MySQL connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2.

    P.S. Notes about monitoring for DDoS would also be welcome - perhaps with Nagios? ;)
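
    To make the first item in that list concrete, here is an illustration-only sketch of per-IP request limiting at the application layer, in TypeScript with a sliding one-minute window; the thresholds are made up, and iptables/ufw enforce the same idea lower in the stack, which is usually the better place for DDoS filtering:

        const WINDOW_MS = 60_000;    // one minute
        const MAX_PER_WINDOW = 120;  // illustrative threshold

        const hits = new Map<string, number[]>();

        function allowRequest(ip: string, now: number = Date.now()): boolean {
            // keep only the timestamps still inside the window
            const recent = (hits.get(ip) ?? []).filter(t => now - t < WINDOW_MS);
            if (recent.length >= MAX_PER_WINDOW) {
                hits.set(ip, recent);
                return false; // over the limit: drop or delay the request
            }
            recent.push(now);
            hits.set(ip, recent);
            return true;
        }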


  • JPA database structure for internationalisation

    - by IrishDubGuy
    I am trying to get a JPA implementation of a simple approach to internationalisation. I want to have a table of translated strings that I can reference from multiple fields in multiple tables, so all text occurrences in all tables are replaced by a reference to the translated strings table. In combination with a language id, this gives a unique row in the translated strings table for that particular field.

    For example, consider a schema that has entities Course and Module as follows:

        Course: int course_id, int name, int description
        Module: int module_id, int name

    The course.name, course.description and module.name fields all reference the id field of the translated strings table:

        TranslatedString: int id, String lang, String content

    That all seems simple enough. I get one table for all strings that could be internationalised, and that table is used across all the other tables.

    How might I do this in JPA, using EclipseLink 2.4? I've looked at an embedded ElementCollection, à la JPA 2.0: Mapping a Map - it isn't exactly what I'm after, because it looks like it relates the translated strings table to the PK of the owning table. That means I could only have one translatable string field per entity (unless I add new join columns into the translatable strings table, which defeats the point - it's the opposite of what I am trying to do). I'm also not clear on how this would work across entities; presumably the id of each entity would have to use a database-wide sequence to ensure uniqueness in the translatable strings table.

    BTW, I tried the example as laid out in that link and it didn't work for me - as soon as the entity had a localizedString map added, persisting it caused the client side to bomb, with no obvious error on the server side and nothing persisted in the DB :S

    I've been around the houses on this for about 9 hours so far. I've also looked at this: Internationalization with Hibernate, which appears to be trying to do the same thing as the link above (without the table definitions it's hard to see what he achieved). Any help would be gratefully received at this point...

    Edit 1 - re AMS's answer below: I'm not sure that really addresses the issue. In his example he leaves the storing of the description text to some other process, but the idea of this type of approach is that the entity object takes the text and locale and this (somehow!) ends up in the translatable strings table. In the first link I gave, the guy is attempting to do this by using an embedded map, which I feel is the right approach. His way, though, has two issues: one, it doesn't seem to work! And two, if it did work, it would store the FK in the embedded table instead of the other way round (I think; I can't get it to run, so I can't see exactly how it persists). I suspect the correct approach ends up with a map reference in place of each text that needs translating (the map being locale -> content), but I can't see how to do this in a way that allows for multiple maps in one entity (without having corresponding multiple columns in the translatable strings table)...

    Read the article

  • Creating a multiplatform webapp with HTML5 and Google Maps

    - by Bart L.
    I'm struggling with how to develop a webapp for Android and iOS. My first app was a simple todo app which was easy to test in my browser, and it only used HTML, JavaScript and CSS. However, I now have to create an app which uses the Google Maps API to get the location. I created a simple HTML5 page to test with, which places a marker on a map. It works fine when testing it on my local server. But when I create an .apk file for Android, the app doesn't work. So I'm wondering: isn't it possible to use it like this? Do I have to use the PhoneGap libraries to use their geolocation library? And if so, how do you handle the development of a webapp in PhoneGap for multiple OSes? Do you have to install an Android environment and an iOS environment to each include the right PhoneGap library and to test them properly? Update: I use the following code on my webserver and it works perfectly. When I upload it in a zip folder to the PhoneGap cloud and install the APK file on my phone, it doesn't work.
    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>Simple Geo test</title>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8/jquery.min.js"></script>
    </head>
    <body>
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
        <script>
        function success(position) {
            var mapcanvas = document.createElement('div');
            mapcanvas.id = 'mapcontainer';
            mapcanvas.style.height = '200px';
            mapcanvas.style.width = '200px';
            document.querySelector('article').appendChild(mapcanvas);
            var coords = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
            var options = {
                zoom: 15,
                center: coords,
                mapTypeControl: false,
                navigationControlOptions: { style: google.maps.NavigationControlStyle.SMALL },
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = new google.maps.Map(document.getElementById("mapcontainer"), options);
            var marker = new google.maps.Marker({ position: coords, map: map, title: "You are here!" });
        }
        if (navigator.geolocation) {
            navigator.geolocation.getCurrentPosition(success);
        } else {
            alert('Geo Location is not supported'); // fallback when geolocation is unavailable
        }
        </script>
        <article></article>
    </body>
    </html>

    Read the article

  • Cisco ASA 5505 site to site IPSEC VPN won't route from multiple LANs

    - by franklundy
    Hi, I've set up a standard site-to-site VPN between 2 ASA 5505s (using the wizard in ASDM) and have the VPN working fine for traffic between Site A and Site B on the directly connected LANs. But this VPN is actually to be used for data originating on LAN subnets that are one hop away from the directly connected LANs. So actually there is another router connected to each ASA (LAN side) that then routes to two completely different LAN ranges, where the clients and servers reside. At the moment, any traffic that gets to the ASA that has not originated from the directly connected LAN gets sent straight to the default gateway, and not through the VPN. I've tried adding the additional subnets to the "Protected Networks" on the VPN, but that has no effect. I have also tried adding a static route to each ASA trying to point the traffic to the other side, but again this hasn't worked. Here is the config for one of the sites. This works for traffic to/from the 192.168.144.x subnets perfectly. What I need is to be able to route traffic from 10.1.0.0/24 to 10.2.0.0/24, for example.
    ASA Version 8.0(3)
    !
    hostname Site1
    enable password ** encrypted
    names
    name 192.168.144.4 Site2
    !
    interface Vlan1
     nameif inside
     security-level 100
     ip address 192.168.144.2 255.255.255.252
    !
    interface Vlan2
     nameif outside
     security-level 0
     ip address 10.78.254.70 255.255.255.252 (this is a private WAN circuit)
    !
    interface Ethernet0/0
     switchport access vlan 2
    !
    interface Ethernet0/1
    !
    interface Ethernet0/2
    !
    interface Ethernet0/3
    !
    interface Ethernet0/4
    !
    interface Ethernet0/5
    !
    interface Ethernet0/6
    !
    interface Ethernet0/7
    !
    passwd ** encrypted
    ftp mode passive
    access-list inside_access_in extended permit ip any any
    access-list outside_access_in extended permit icmp any any echo-reply
    access-list outside_1_cryptomap extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    access-list inside_nat0_outbound extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    pager lines 24
    logging enable
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-603.bin
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 0.0.0.0 0.0.0.0
    access-group inside_access_in in interface inside
    access-group outside_access_in in interface outside
    route outside 0.0.0.0 0.0.0.0 10.78.254.69 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout uauth 0:05:00 absolute
    dynamic-access-policy-record DfltAccessPolicy
    aaa authentication ssh console LOCAL
    http server enable
    http 0.0.0.0 0.0.0.0 outside
    http 192.168.1.0 255.255.255.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set pfs
    crypto map outside_map 1 set peer 10.78.254.66
    crypto map outside_map 1 set transform-set ESP-3DES-SHA
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    no crypto isakmp nat-traversal
    telnet timeout 5
    ssh 0.0.0.0 0.0.0.0 outside
    ssh timeout 5
    console timeout 0
    management-access inside
    threat-detection basic-threat
    threat-detection statistics port
    threat-detection statistics protocol
    threat-detection statistics access-list
    group-policy DfltGrpPolicy attributes
     vpn-idle-timeout none
    username enadmin password * encrypted privilege 15
    tunnel-group 10.78.254.66 type ipsec-l2l
    tunnel-group 10.78.254.66 ipsec-attributes
     pre-shared-key *
    !
    !
    prompt hostname context

    Read the article

  • Oracle Functional Testing Suite Advanced Pack for Oracle EBS Now Available

    - by Anne Carlson (Oracle Development)
    There’s new news about automated testing of E-Business Suite using the Oracle Application Testing Suite, a.k.a. “OATS”. E-Business Suite Development is pleased to announce the availability of the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite. The new pack, available with the latest release of Oracle Application Testing Suite (12.4.0.2), provides pre-built test components and flows to automate the in-depth testing of Oracle E-Business Suite applications. Designed for use with the Oracle Application Testing Suite and its Oracle Flow Builder capability, these pre-built components and flows can help Oracle E-Business Suite customers significantly reduce the time and effort needed to create and maintain automated test scripts. The Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite is available now for EBS 12.1.3, and availability for EBS 12.2 is planned.
    Some Background on Automating Testing with Oracle Application Testing Suite and Oracle Flow Builder
    Testing complex packaged applications like Oracle E-Business Suite can be time-consuming and challenging for organizations, hampering their ability to upgrade to the latest releases or apply the latest patches. Oracle Application Testing Suite offers organizations a unique and powerful testing platform for Oracle E-Business Suite and other Oracle applications. With the 12.3.0.1 release of Oracle Application Testing Suite, we introduced the Oracle Flow Builder testing framework and an accompanying starter pack of pre-built test components and flows. The starter pack, which contains over 2000 components and 200 flows, provides broad coverage of commonly-used base functionality and is designed to jump-start the test automation effort. Using Oracle Flow Builder, even non-technical testers can create working test scripts using the pre-built components that Oracle provides. Each component represents an atomic test operation such as “create an invoice batch” or “apply an invoice hold.” Testers can assemble the pre-built components into test flows, and combine test flows with spreadsheet data to drive the testing of multiple data conditions. The Oracle Flow Builder framework allows customers to add, modify and extend the pre-built components to address new functionality and customizations of the Oracle E-Business Suite. Using Oracle Flow Builder’s component-based test generation framework instead of a traditional record/playback approach has allowed the EBS Quality Assurance team to reduce their test automation effort by 60%. E-Business Suite customers can significantly reduce their test automation effort using Oracle Application Testing Suite with Oracle Flow Builder and the pre-built test components and flows that Oracle provides.
    Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite Improves Test Coverage
    With the Oracle Application Testing Suite 12.4.0.2 and the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite, we are now delivering a significant number of additional test components and flows beyond those contained in the Oracle Flow Builder starter pack. These additional test components and flows provide 70-80% test coverage and enable the automation of detailed and complex test flows across the following Oracle E-Business Suite products:
    - Oracle Asset Lifecycle Management
    - Oracle Channel Revenue Management
    - Oracle Discrete Manufacturing
    - Oracle Incentive Compensation
    - Oracle Lease and Finance Management
    - Oracle Process Manufacturing
    - Oracle Procurement
    - Oracle Project Management
    - Oracle Property Manager
    - Oracle Service
    Downloads
    You can download the Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite from the Oracle Technology Network.
    References
    - Oracle Applications Testing Suite
    - YouTube: Oracle Flow Builder Training
    - YouTube: Oracle Applications Testing Suite and Flow Builder Demonstration
    - Oracle Functional Testing Suite Advanced Pack Readme for E-Business Suite (Note 1905989.1)
    Related Articles
    - Automate Testing Using Oracle Application Testing Suite with Flow Builder for E-Business Suite
    - EBS 12.1.1 Test Starter Kit Now Available for Oracle Applications Testing Suite
    - Oracle Application Testing Suite 9.0 Supported with Oracle E-Business Suite
    - Using the Oracle Application Testing Suite with EBS: Interim Update #1

    Read the article

  • View Weather Underground Forecasts in Google Chrome

    - by Asian Angel
    If you like a simple, straightforward interface for keeping up with weather forecasts, then join us as we look at the Weather Underground extension for Google Chrome.
    Weather Underground in Action
    As soon as you click on the “Toolbar Icon” you will need to enter a location. Keep in mind that you will need to enter the “city and country” if using that option. Going with less information will yield an “error”. Note: The extension did not work for some Asian locations during our tests. In honor of the Olympics we chose Vancouver, Canada. You can hover over the “Toolbar Button” to see the current conditions, or click to view the current day’s conditions, the current day’s forecast, and the forecast for the following three days. It is a simple, straightforward interface. Note: There are no options to worry about. Clicking on the “Detailed Forecast Link” in the drop-down window will take you to the Weather Underground webpage for your location. Clicking on the “Weather Underground Link” in the drop-down window will take you to the Weather Underground U.S. homepage.
    Additional Weather Underground Fun
    Since we were focusing on Weather Underground, we have an extra bit of fun for you. If you love being able to view a “large scale” map of your location with current conditions and forecast combined, then you might want to have a look at Weather Underground’s “wxmap webpage”. Using the link below you can access the basic starting page, where you will be asked to enter your location. Once you have entered the information you will see the default “Terrain View” for your location and a “Current Conditions & Forecast Window” in the lower left corner. You can modify how your map looks by choosing from “Temperature, Precipitation, Clouds, Satellite, Hybrid, & Terrain” views. Going full screen in your browser with this gives your monitor a wonderful and unique look that will have your family & friends asking how you did it. Note: Terrain View shown here. Clicking on the “Settings Link” in the upper left corner will let you tweak your map view very nicely.
    Conclusion
    If you love using Weather Underground for your weather forecasts, then you can add a “double dose” of goodness to your browser.
    Links
    - Download the Weather Underground extension (Google Chrome Extensions)
    - Access the Full Screen Weather Underground Map & Forecast for your area

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and may be implemented when a company is having financial problems or to prevent them. ROI is defined as the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today’s market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes, it in fact saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems. In my personal opinion, these drivers would not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether it is to increase annual donations or to invest in the lives of others. Success or failure of a project can be determined by one or more of these drivers, based on the scope of a project and the company’s priorities associated with each of the drivers. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, then the project might still be considered a success based on how close it came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort it, because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that software development process stages include:
    - Requirements Analysis and Definition
    - System Design
    - Program Design
    - Program Implementation
    - Unit Testing
    - Integration Testing
    - System Delivery
    - Maintenance
    Each evaluation at every stage should consider all the business drivers included in the scope of a project and how close each is expected to come to meeting expectations. In addition, minimum requirements of acceptance should be included within the scope of the project and should be reevaluated as the project progresses, to ensure that the project still makes good economic sense to continue. If the project falls below these benchmarks, it should be put on hold until it does make sense again, or aborted because it does not meet the business driver requirements.
    References
    Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program
    Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html
    PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html
    Pfleeger, S. & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall
    Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy’s Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and to send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average.
    Company Overview
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.
    Business Challenges
    TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were:
    - Increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry
    - Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance
    - Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities
    Solution Deployed
    TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle Partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.
    Business Results
    - Significantly lowered resource needs for payables processing through centralization and improved efficiency
    - Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts
    - On track to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average
    - Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals
    “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” - Terry Marshall, Vice President of Information Services, Texas Industries, Inc.
    Additional Information
    - TXI Customer Snapshot
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Forms Recognition

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past, and she wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise when she found out that I had spent the better part of a decade managing content within the Health Insurance industry, and I had discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries? The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction intensive. Insurers are the ones who manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes. We are all aware of the significant impact that the healthcare reform bill has had on the Health Insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, the administrative costs of health insurance include the insurer's cost to administer the health plan and the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanations of benefits are some manifestations of technological advancements in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.
    The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process, and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms; manage digital assets; integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions; build Business Process Management (BPM) processes; define retention and disposition instructions to comply with state and federal regulations; and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management. At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use, and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article
