Search Results

Search found 11567 results on 463 pages for 'map provider'.


  • Android: ActivityThread.performLaunchActivity error

    - by fordays
    Hi, I'm getting an ActivityThread.performLaunchActivity(ActivityThread$ActivityRecord,Intent) error each time I boot up my program in the debugger. The program won't even start up! Any help would be greatly appreciated! I'm very new to this environment. Let me know if you need any more information/code to help me out. Here is my logcat:
    06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test2' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory)
    06-09 11:16:37.887: ERROR/MemoryHeapBase(53): error opening /dev/pmem: No such file or directory
    06-09 11:16:37.887: ERROR/SurfaceFlinger(53): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake
    06-09 11:16:37.927: ERROR/libEGL(53): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found)
    06-09 11:16:38.407: ERROR/libEGL(64): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found)
    06-09 11:16:41.358: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/usb/online'
    06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_vol'
    06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_temp'
    06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mouse0, Not a typewriter
    06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mice, Not a typewriter
    06-09 11:16:41.797: ERROR/System(53): Failure starting core service
    06-09 11:16:41.797: ERROR/System(53): java.lang.SecurityException
    06-09 11:16:41.797: ERROR/System(53): at android.os.BinderProxy.transact(Native Method)
    06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManagerProxy.addService(ServiceManagerNative.java:146)
    06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManager.addService(ServiceManager.java:72)
    06-09 11:16:41.797: ERROR/System(53): at com.android.server.ServerThread.run(SystemServer.java:162)
    06-09 11:16:41.797: ERROR/AndroidRuntime(53): Crash logging skipped, no checkin service
    06-09 11:16:42.777: ERROR/LockPatternKeyguardView(53): Failed to bind to GLS while checking for account
    06-09 11:16:46.557: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings
    06-09 11:16:46.577: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings
    06-09 11:16:49.087: ERROR/ApplicationContext(53): Couldn't create directory for SharedPreferences file shared_prefs/wallpaper-hints.xml
    06-09 11:16:51.146: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:54.266: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:54.416: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:56.336: ERROR/MediaPlayerService(31): Couldn't open fd for content://settings/system/notification_sound
    06-09 11:16:56.356: ERROR/MediaPlayer(53): Unable to to create media player
    06-09 11:16:56.637: ERROR/AndroidRuntime(201): Uncaught handler: thread main exiting due to uncaught exception
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.svgeeks.kidneytest/com.svgeeks.kidneytest.KidneyTest}: java.lang.ClassCastException: android.widget.EditText
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.access$2100(ActivityThread.java:116)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Handler.dispatchMessage(Handler.java:99)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Looper.loop(Looper.java:123)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.main(ActivityThread.java:4203)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invokeNative(Native Method)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invoke(Method.java:521)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at dalvik.system.NativeStart.main(Native Method)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): Caused by: java.lang.ClassCastException: android.widget.EditText
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.svgeeks.kidneytest.KidneyTest.onCreate(KidneyTest.java:57)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): ... 11 more
    06-09 11:16:56.876: ERROR/dalvikvm(201): Unable to open stack trace file '/data/anr/traces.txt': Permission denied
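
    For anyone scanning the trace: the key lines are the ClassCastException: android.widget.EditText and the Caused by frame at KidneyTest.java:57. That combination usually means the view declared at the referenced ID in the layout XML is an EditText while line 57 casts the findViewById() result to a different widget type (or the reverse). A hypothetical sketch of the pattern -- the ID and types below are made up, not taken from the question:

    // Hypothetical sketch; the real names are at KidneyTest.java:57.
    // Suppose the layout declares: <EditText android:id="@+id/weight" ... />
    TextView weight = (TextView) findViewById(R.id.weight);   // throws ClassCastException

    // Casting to the type actually declared in the XML fixes it:
    EditText weightInput = (EditText) findViewById(R.id.weight);

    If the cast already matches the XML, a stale build can surface the same error; cleaning and rebuilding the project usually clears it.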


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:
    Backup Services: General Availability of Windows Azure Backup Services
    Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    Active Directory: Securely manage hundreds of SaaS applications
    Enterprise Management: Use Active Directory to Better Manage Windows Azure
    Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support
    All of these improvements are now available to use immediately. Below are more details about them.
    Backup Service: General Availability Release of Windows Azure Backup
    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way, from any location, with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but you).
    Getting Started
    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:
    Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.
    Below are some of the key benefits the Windows Azure Backup Service provides:
    Simple configuration and management.
    Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
    Block level incremental backups.
    The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    Data compression, encryption and throttling.
    The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    Data integrity is verified in the cloud.
    In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
    Configurable retention policies for storing data in the cloud.
    The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.
    Hyper-V Recovery Manager: Now Available in Public Preview
    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime. Application data in a Hyper-V Recovery Manager scenario always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn
    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:
    Ability to Delete both VM Instances + Attached Disks in One Operation
    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.
    Warnings on Availability Sets with Only One Virtual Machine In Them
    One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a high availability way. One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this. You can learn more about configuring the availability of your virtual machines here.
    Configuring SQL Server Always On
    SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server Always On by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.
    Active Directory: Application Access Enhancements
    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
    In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:
    SSO to every SaaS app we integrate with – Users can single sign-on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.
    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.
    Enterprise Management: Use Active Directory to Better Manage Windows Azure
    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:
    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.
    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.
    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.
    Below are some details on how all of this works:
    Subscriptions within a Directory
    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.
    Manage your Directory
    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that users within a directory by default do not have admin rights to log in or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
    Sign-In Integration within Visual Studio
    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!
    Manage Subscriptions across Multiple Directories
    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.
    Filtering By Directory and Subscription
    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account.
    Windows Azure SDK 2.2
    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:
    Visual Studio 2013 Support
    Integrated Windows Azure Sign-In support within Visual Studio
    Remote Debugging Cloud Services with Visual Studio
    Firewall Management support within Visual Studio for SQL Databases
    Visual Studio 2013 RTM VM Images for MSDN Subscribers
    Windows Azure Management Libraries for .NET
    Updated Windows Azure PowerShell Cmdlets and ScriptCenter
    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates
    In addition to the above enhancements, today's release also includes a number of additional improvements:
    AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
    AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    Operation Logs: Auditing support for Service Bus management operations
    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)
    Summary
    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.
    Hope this helps,
    Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Reuse security code between WCF and MVC.NET

    - by mrjoltcola
    First the background: I jumped into MVC.NET from the Java MVC world, so my implementation below is possibly cheating, I don't know. I avoided fooling with a custom membership provider and I just implemented the base code needed to authenticate and load roles in my LogOn action. Typically I just need to check roles programmatically, and have no use for all of the other membership features, so I didn't originally think I needed a full Membership provider. I have a successful WCF project with a custom authentication and authorization layer that I did at least write per the proper API. I implemented it with custom IPrincipal, UserNamePasswordValidator and IAuthorizationPolicy classes to load from an Oracle database. In my WCF services, I use declarative security: [PrincipalPermission(SecurityAction.Demand, Role="ADMIN")].
    The question (on the ASP.NET/MVC.NET side): All my reading indicates I should implement a custom Membership/Roles provider, and use [Authorize(Roles="ADMIN")] on my controller actions. At this point, I don't have a true Membership provider, but I'm using the same User class that implements the IPrincipal interface that works with the WCF security. I plan to share common code between the WCF and ASP.NET modules. So my LogOn action is not using the FormsService (and I assume this is bad). I had commented it out, and just used my "UserService" to access the Oracle db. Note my "TODO" comment below.
    public ActionResult LogOn(LogOnModel model, string returnUrl)
    {
        log.Info("Login attempt by " + model.UserName);
        if (ModelState.IsValid)
        {
            User user = userService.findByUserName(model.UserName);
            // Commented original MemberShipService code, this is probably bad
            // if (MembershipService.ValidateUser(model.UserName, model.Password))
            if (user != null && user.Authenticate(model.Password) == true)
            {
                log.Info("Login success by " + model.UserName);
                FormsService.SignIn(model.UserName, model.RememberMe);
                // TODO: Override with Custom identity / roles?
                user.AddRoles(userService.listRolesByUser(user)); // pull in roles from db
                if (!String.IsNullOrEmpty(returnUrl))
                    return Redirect(returnUrl);
                else
                    return RedirectToAction("Index", "Home");
            }
            else
            {
                log.Info("Login failure by " + model.UserName);
                ModelState.AddModelError("", "The user name or password provided is incorrect.");
            }
        }
        // If we got this far, something failed, redisplay form
        return View(model);
    }
    So can I make the above work? Can I stick the IPrincipal (User) into the CurrentContext or HttpContext? Can I integrate the custom IPrincipal I've already created without writing a full Membership/Roles Provider? I currently stick the User object into the session and access it from all MVC.NET controllers with a "CurrentUser" property which grabs it from the session on demand. But this doesn't work with the [Authorize] attribute; I assume that is because it knows nothing about my custom Principal in the session, and is instead using whatever FormsService.SignIn() produces. I also found that session timeouts screw up the login redirect: the user doesn't get forwarded; instead we get a null exception accessing User from the session, and I assume it is related to my "skipping steps" to get a quick implementation. Thanks.
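
    For reference, one direction that fits what is described above: rebuild the custom IPrincipal on each request in Global.asax so that [Authorize(Roles="ADMIN")] sees it instead of the principal forms authentication creates. This is a minimal sketch only -- userService and the User type are the question's own classes, and making them reachable from Global.asax is an assumption:

    // Global.asax.cs -- minimal sketch, not production code.
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
        if (authCookie == null) return;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);

        // Rebuild the custom principal from the ticket's user name so both
        // [Authorize(Roles="ADMIN")] and PrincipalPermission demands see it.
        User user = userService.findByUserName(ticket.Name);
        if (user != null)
        {
            user.AddRoles(userService.listRolesByUser(user));
            HttpContext.Current.User = user;                  // User implements IPrincipal
            System.Threading.Thread.CurrentPrincipal = user;  // keeps declarative demands consistent
        }
    }

    With something like that in place, the session-stored CurrentUser becomes a convenience rather than the security source, which should also avoid the null reference after session timeouts.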


  • How to update all the SSIS packages’ Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
    During the development of a BI solution, we all know that 80% of the time is spent during the ETL (Extract, Transform, Load) phase. If you use the BI Stack Tool provided by Microsoft SQL Server, this step is accomplished by the development of n Integration Services (SSIS) packages. In general, the number of packages made in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package, therefore, extracts data from a source, it "hammers" :) the data and then transfers it to a specific destination. Very often it happens that the connection to the source data is the same for all packages. Using Integration Services, this results in having the same Connection Manager (perhaps with the same name) for all packages. The source data of my BI solution comes from an Helper database (HLP); then, for each package that imports this data, I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the Connection String is hard-wired and therefore you have to open the SSIS project and use the proper wizard to change it...).
    In order to change the HLP Connection String at runtime, we could use the Package Configuration, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience could have changed all the connections by hand, double-clicking the HLP Connection Manager of each package, and then changing the referenced server/database. Not being endowed with such virtues :) I took just a little of time to write a small script in PowerShell, using the fact that a SSIS package (a .dtsx file) is nothing but an xml file, and therefore can be changed quite easily. I'm not a guru of PowerShell, but I managed more or less to put together the following lines of code:
    $LeftDelimiterString = "Initial Catalog="
    $RightDelimiterString = ";Provider="
    $ToBeReplacedString = "AstarteToBeReplaced"
    $ReplacingString = "AstarteReplacing"
    $MainFolder = "C:\MySSISPackagesFolder"
    $files = get-childitem "$MainFolder" *.dtsx `
        | Where-Object {!($_.PSIsContainer)}
    foreach ($file in $files)
    {
        (Get-Content $file.FullName) `
            | % {$_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3"} `
            | Set-Content $file.FullName;
    }
    The script above just opens any SSIS package (.dtsx) in the supplied folder, then for each of them goes in search of the following text:
    Initial Catalog=AstarteToBeReplaced;Provider=
    and it replaces the text found with this:
    Initial Catalog=AstarteReplacing;Provider=
    I don’t enter into the details of each cmdlet used. I leave the reader to search for these details. Alternatively, you can use a specific object model exposed in some .NET assemblies provided by Integration Services, or you can use the Pacman utility. Enjoy! :)
    P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in.
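
    The manual check-out/check-in step mentioned in the P.S. can be scripted as well. A sketch, assuming tf.exe (the Team Foundation command-line client) is on the PATH and $MainFolder is mapped in a TFS workspace:

    # Sketch: wrap the replace script with TFS check-out/check-in.
    & tf checkout "$MainFolder\*.dtsx" /recursive

    # ... run the replace script above ...

    & tf checkin "$MainFolder\*.dtsx" /recursive /comment:"Updated HLP connection strings"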


  • Exception: Cannot start your application. The workgroup information file is missing or opened exclusively by another user

    - by Jeev
    We were getting this error when trying to connect to a password-protected Access file. This is what the connection string looked like:
    string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source="Path to your access file";User Id=;Password=password";
    To fix the issue, this is what we did:
    string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source="Path to your access file";Jet OLEDB:Database Password=password";
    We removed the User Id and changed Password to Jet OLEDB:Database Password. Hope this helps someone.
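
    For context, a minimal console sketch showing the corrected string in use; the file path and password are placeholders, and since the Jet 4.0 provider is 32-bit only, this assumes an x86 process:

    // Minimal sketch of opening a password-protected .mdb with the
    // corrected connection string. Path and password are placeholders.
    using System;
    using System.Data.OleDb;

    class JetPasswordExample
    {
        static void Main()
        {
            string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                               @"Data Source=C:\data\secure.mdb;" +
                               @"Jet OLEDB:Database Password=password";

            using (OleDbConnection con = new OleDbConnection(conString))
            {
                // Open() is where the 'workgroup information file' exception
                // surfaces if User Id/Password is used instead of the
                // Jet OLEDB:Database Password keyword.
                con.Open();
                Console.WriteLine(con.State);
            }
        }
    }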


  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can be used with a custom authentication provider also. But if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you to:
    - check whether a user is locked
    - find out the number of locked users in the realm
    #Define constants
    url='t3://localhost:7001'
    username='weblogic'
    password='weblogic'
    checkuser='test-deployer'
    #Connect
    connect(username,password,url)
    #Get Lockout Manager Runtime
    serverRuntime()
    dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
    ulmr = dr.getUserLockoutManagerRuntime()
    print '-------------------------------------------'
    #Check whether a user is locked
    if (ulmr.isLockedOut(checkuser) == 0):
        islocked = 'NOT locked'
    else:
        islocked = 'locked'
    print 'User ' + checkuser + ' is ' + islocked
    #Print number of locked users
    print 'No. of locked user - ', Integer(ulmr.getUserLockoutTotalCount())
    print '-------------------------------------------'
    print ''
    #Disconnect & Exit
    disconnect()
    exit()
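
    The same runtime MBean can also release a locked account. A sketch that would slot in before the disconnect()/exit() lines above; it assumes the clearLockout() operation on UserLockoutManagerRuntimeMBean, so verify it against the MBean reference for your WebLogic version:

    #Unlock an account (place before disconnect()/exit() above).
    #Assumes UserLockoutManagerRuntimeMBean.clearLockout() exists in your version.
    lockeduser='test-deployer'
    if (ulmr.isLockedOut(lockeduser) == 1):
        ulmr.clearLockout(lockeduser)
        print 'Lockout cleared for ' + lockeduser
    else:
        print 'User ' + lockeduser + ' is not locked'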


  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspx
    The good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it's out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven't seen the course, there's a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour.
    The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations:
    • inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache;
    • serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted;
    • instrumentation, so you can optionally use performance counters to monitor cache attempts and hits, at a low level.
    The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this:
    Cache.Memory.Get<RefData>(refDataKey);
    - or with configuration to use the default cache provider:
    Cache.Default.Get<RefData>(refDataKey);
    The library uses Unity's interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute:
    [Cache]
    public RefData GetItem(string refDataKey)
    - and you can be more specific on the required cache behaviour:
    [Cache(CacheType=CacheType.Memory, Days=1)]
    public RefData GetItem(string refDataKey)
    - or really specific:
    [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)]
    public RefData GetItem(string refDataKey)
    Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache. You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all:
    <sixeyed.caching enabled="true">
      <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix="Sixeyed.Caching.Tests"/>
      <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/>
      <!-- key must be 32 characters, IV must be 16 characters -->
    </sixeyed.caching>
    For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name):
    <sixeyed.caching enabled="true">
      <targets>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/>
      </targets>
    </sixeyed.caching>
    It's released under the MIT license, so you can use it freely in your own apps and modify as required. I'll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there's an FAQ to get you started.
    * - in the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.
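
    To make the AOP flavour concrete, here is a sketch of a cacheable service resolved through a container. The interface, implementation and container wiring are illustrative assumptions, not the library's documented setup -- see the GitHub wiki/FAQ for the real registration:

    // Sketch: a service whose results are cached via the [Cache] attribute.
    // IRefDataService/RefDataService are hypothetical types; 'container'
    // is assumed to be a Unity container configured for interception.
    public interface IRefDataService
    {
        RefData GetItem(string refDataKey);
    }

    public class RefDataService : IRefDataService
    {
        [Cache(Hours = 1)]
        public RefData GetItem(string refDataKey)
        {
            return LoadFromDatabase(refDataKey); // expensive call, cached for an hour
        }

        private RefData LoadFromDatabase(string key) { /* ... */ return new RefData(); }
    }

    // Resolved through the container so the attribute is intercepted:
    var service = container.Resolve<IRefDataService>();
    var first = service.GetItem("GB");   // hits the database
    var second = service.GetItem("GB");  // served from the cache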


  • 409 CONFLICT : MAAS

    - by amir beygi
    I have a problem with my MAAS. The juju bootstrap result:
    2012-08-31 03:59:17,721 INFO Bootstrapping environment 'maas' (origin: distro type: maas)...
    Unexpected Error interacting with provider: 409 CONFLICT
    2012-08-31 03:59:17,951 ERROR Unexpected Error interacting with provider: 409 CONFLICT
    I also have 3 nodes stuck in Commissioning status (delete node is disabled and there is no start button). DHCP seems to be working because LAN boot works, but the boot ends with:
    ALERT! /dev/disk/by-label/cloudimg-rootfs does not exist. Dropping to a shell!
    BusyBox.... (initramfs)


  • Reach for the Stars…Even if you Miss you’ll Land in the Cloud

    - by Kristin Rose
    “You make investment in the next generation of technology, while continuing to invest in your existing.” – Larry Ellison
    Last week's Oracle Cloud and Oracle Platinum Services announcement highlighted some of the exciting ways in which Oracle made the switch from being an On-Premise Application provider to both an On-Premise and Cloud Application provider. The announcement was led by Oracle CEO Larry Ellison, and Oracle President Mark Hurd. Together they announced the industry's broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Attendees also anxiously awaited Larry's first tweet. Be sure to watch the webcast replay below to learn more about the new developments in Oracle's Cloud strategy, and game-changing advances in Oracle Support.
    Sending you Cloud Dreams and Twitter Wishes,
    The OPN Communications Team


  • Oracle and ATG: The Next Generation of Customer Experience

    - by divya.malik
    Oracle today announced that it has completed the acquisition of Art Technology Group (ATG), Inc. In a webcast this morning, Thomas Kurian, Executive Vice President, Oracle; Anthony Lye, Senior Vice President, CRM at Oracle; and Ken Volpe, Senior Vice President of Products and Technology from ATG, presented the rationale, strategy and future direction of this acquisition. ATG is a leading E-Commerce service provider and Oracle is a leading CRM and Retail Applications provider, which makes it a winning team. There has been a lot of positive feedback from the analysts and press as well as customers. "As a customer of both Oracle and ATG, we view the integration of the two companies as a natural fit," said Kevin Cunnington, Global Head of Online, Vodafone Group. "We look forward to new efficiencies that address our online and cross-channel business strategies and help us further provide superior customer experiences."
    For more information about Oracle and ATG:
    Overview and FAQs
    Webcast
    Press Release
    Technorati Tags: oracle,oracle siebel crm,atg,crm


  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involves importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; blindly, it will pick a data type based on the first few rows and leave out all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read that and recognize the exact data type of each column.
    Now what is schema.ini? Taken from the documentation: schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx
    Points to remember before creating schema.ini:
    1. The schema information file must always be named 'schema.ini'.
    2. The schema.ini file must be kept in the same directory where the CSV file exists.
    3. The schema.ini file must be created before reading the CSV file.
    4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.
    Here's an example of what the schema looks like:
    [Employee.csv]
    ColNameHeader=False
    Format=CSVDelimited
    DateTimeFormat=dd-MMM-yyyy
    Col1=EmployeeID Long
    Col2=EmployeeFirstName Text Width 100
    Col3=EmployeeLastName Text Width 50
    Col4=EmployeeEmailAddress Text Width 50
    To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's go ahead and fire up Visual Studio and then create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can now proceed with the code for uploading and importing the CSV file to the SQL Server database.
    Here are the full code blocks below:
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.OleDb;
    using System.IO;
    using System.Text;

    namespace WebApplication1
    {
        public partial class CSVToSQLImporting : System.Web.UI.Page
        {
            private string GetConnectionString()
            {
                return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
            }
            private void CreateDatabaseTable(DataTable dt, string tableName)
            {
                string sqlQuery = string.Empty;
                string sqlDBType = string.Empty;
                string dataType = string.Empty;
                int maxLength = 0;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                for (int i = 0; i < dt.Columns.Count; i++)
                {
                    dataType = dt.Columns[i].DataType.ToString();
                    if (dataType == "System.Int32")
                    {
                        sqlDBType = "INT";
                    }
                    else if (dataType == "System.String")
                    {
                        sqlDBType = "NVARCHAR";
                        maxLength = dt.Columns[i].MaxLength;
                    }

                    if (maxLength > 0)
                    {
                        sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                    }
                    else
                    {
                        sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                    }
                }

                sqlQuery = sb.ToString();
                sqlQuery = sqlQuery.Trim().TrimEnd(',');
                sqlQuery = sqlQuery + " )";

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }
            private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
            {
                string sqlQuery = string.Empty;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                sqlQuery = sb.ToString();

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }
            protected void Page_Load(object sender, EventArgs e)
            {
            }
            protected void BTNImport_Click(object sender, EventArgs e)
            {
                if (FileUpload1.HasFile)
                {
                    FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                    if (fileInfo.Name.Contains(".csv"))
                    {
                        string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                        string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                        // Save the CSV file on the server inside 'UploadedCSVFiles'
                        FileUpload1.SaveAs(csvFilePath);

                        // Fetch the location of the CSV file
                        string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                        string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                        string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                        // Load the data from CSV to DataTable
                        OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                        DataTable dtCSV = new DataTable();
                        DataTable dtSchema = new DataTable();

                        adapter.FillSchema(dtCSV, SchemaType.Mapped);
                        adapter.Fill(dtCSV);

                        if (dtCSV.Rows.Count > 0)
                        {
                            CreateDatabaseTable(dtCSV, fileName);
                            Label2.Text = string.Format("The table ({0}) has been successfully created to the database.", fileName);

                            string fileFullPath = filePath + fileInfo.Name;
                            LoadDataToDatabase(fileName, fileFullPath, ",");

                            Label1.Text = string.Format("({0}) records has been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                        }
                        else
                        {
                            LBLError.Text = "File is empty.";
                        }
                    }
                    else
                    {
                        LBLError.Text = "Unable to recognize file.";
                    }
                }
            }
        }
    }
    The code above consists of three (3) private methods, which are GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string. This method basically gets the connection string that is configured in the web.config file (a sample web.config entry is sketched at the end of this post). CreateDatabaseTable() is a method that accepts two (2) parameters, which are the DataTable and the filename. As the method name already suggests, this method automatically creates a table in the database based on the source DataTable and the filename of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters, which are the tableName, fileFullPath and delimeter value. This method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and at the same time this is where CreateDatabaseTable() and LoadDataToDatabase() are being called. If you notice, I also added some basic error trapping and validation within that event.
    Now to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo let's create a CSV file, name it "Employee" and add some data to it. Here's an example below:
    1,VMS,Durano,[email protected]
    2,Jennifer,Cortes,[email protected]
    3,Xhaiden,Durano,[email protected]
    4,Angel,Santos,[email protected]
    5,Kier,Binks,[email protected]
    6,Erika,Bird,[email protected]
    7,Vianne,Durano,[email protected]
    8,Lilibeth,Tree,[email protected]
    9,Bon,Bolger,[email protected]
    10,Brian,Jones,[email protected]
    Now save the newly created CSV file in some location on your hard drive. Okay, let's run the application and browse the CSV file that we have just created. Take a look at the sample screenshots below:
    After browsing the CSV file.
    After clicking the Import Button.
    Now if we look at the database that we created earlier, you'll notice that the Employee table has been created with the imported data in it. See the screenshot below.
    That's it! I hope someone finds this post useful!
    Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET
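
    One last note: GetConnectionString() expects a DBConnectionString entry in web.config. A sketch of that entry; the server name, database and security settings are placeholders for your environment:

    <!-- Sketch: the connection string GetConnectionString() reads. -->
    <connectionStrings>
      <add name="DBConnectionString"
           connectionString="Data Source=localhost;Initial Catalog=TestDB;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>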


  • Creating Wildcard Certificates with makecert.exe

    - by Shawn Cicoria
    It would be nice to be able to make wildcard certificates for use in development with makecert – turns out, it's real easy. Just ensure that your CN= value is the wildcard string to use. The following sequence generates a CA cert, then the public/private key pair for a wildcard certificate:
    REM make the CA (a self-signed root; the CN here is just an example name)
    makecert -pe -n "CN=ContosoTest Dev CA" -r -a sha1 -len 2048 -sky signature -sv CA.pvk CA.cer
    REM now make the server wildcard cert, issued by the CA
    makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer
    pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx
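
    To have the machine (and browsers) trust certificates issued by this CA during development, the CA cert needs to land in the Trusted Root store. A small sketch using certutil from an elevated prompt; the CA name matches the example above:

    REM trust the development CA machine-wide (run elevated)
    certutil -addstore Root CA.cer
    REM remove it again when finished:
    REM certutil -delstore Root "ContosoTest Dev CA"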


  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software to implement.
    Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set.
    So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like:
    http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server&
    Now you want to create a new text file and start out with this information:
    <?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
      <ShortName></ShortName>
      <Description></Description>
      <Url type="application/rss+xml" template=""/>
      <Url type="text/html" template=""/>
    </OpenSearchDescription>
    Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like:
    http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle
    Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension.
    The completed file should look like:
    <?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
      <ShortName>Universal Content Management</ShortName>
      <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description>
      <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/>
      <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/>
    </OpenSearchDescription>
    After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there.
    And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnails of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace xmlns:media="http://search.yahoo.com/mrss/" to the rss definition:
    <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/">
    Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element:
    </images>
    <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" />
    <description>
    This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started.
    *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • Trouble with OpenLayers Styles.

    - by Jenny
    So, tired of always seeing the bright orange default regular polygons, I'm trying to learn to style OpenLayers. I've had some success with:

        var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
        layer_style.fillColor = "#000000";
        layer_style.strokeColor = "#000000";
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = layer_style;

    But since I am drawing my polygons with DrawFeature, my style only takes effect once I've finished drawing, and seeing it snap from bright orange to grey is sort of disconcerting. So, I learned about temporary styles, and tried:

        var layer_style = new OpenLayers.Style({"default": {fillColor: "#000000"}, "temporary": {fillColor: "#000000"}});
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = layer_style;

    This got me a still-orange square -- until I stopped drawing, when it snapped into completely opaque black. I figured maybe I had to explicitly set the fillOpacity... no dice. Even when I changed the two fill colors to pink and blue, respectively, I still saw only orange and opaque black. I've tried messing with StyleMaps, since I read that if you only add one style to a style map, it uses the default one for everything, including the temporary style:

        var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
        var style_map = new OpenLayers.StyleMap(layer_style);
        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer");
        polygonLayer.style = style_map;

    That got me the black opaque square, too (even though that layer style works when not given to a map). Passing the map to the layer itself like so:

        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", style_map);

    didn't get me anything at all -- orange all the way, even after drawing. This, however:

        polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", {styleMap: style_map});

    is a lot more successful: orange while drawing, translucent black with black outline when drawn. Just like when I didn't use a map. Problem is, still no temporary... So, I tried initializing my style map this way:

        var style_map = new OpenLayers.StyleMap({"default": layer_style, "temporary": layer_style});

    No opaque square, but no dice for the temporary, either... still orange snapping to black transparent. Even if I make a new Style (layer_style2) and set temporary to that, still no luck. And no luck with setting the "select" style, either. What am I doing wrong? Temporary IS for styling things that are currently being sketched, correct? Is there some other way specific to the DrawFeature control?

    Edit: setting extendDefault to true doesn't seem to help, either:

        var style_map = new OpenLayers.StyleMap({"default": layer_style, "temporary": layer_style}, {"extendDefault": "true"});
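
    A minimal sketch of one way this is commonly wired up, assuming OpenLayers 2.x: the DrawFeature control sketches onto its handler's own internal layer, not the target layer, so the styleMap has to be handed down to that sketch layer via handlerOptions.layerOptions. All names and colors here are illustrative, not from the question:

        // Sketch (untested, assumes OpenLayers 2.x): style finished features
        // and the in-progress sketch with the same StyleMap.
        var style = OpenLayers.Util.applyDefaults(
            {fillColor: "#000000", strokeColor: "#000000", fillOpacity: 0.4},
            OpenLayers.Feature.Vector.style["default"]);

        var styleMap = new OpenLayers.StyleMap({
            "default": style,    // finished features
            "temporary": style   // features while being sketched
        });

        // The layer's styleMap covers finished features...
        var polygonLayer = new OpenLayers.Layer.Vector("PolygonLayer", {styleMap: styleMap});
        map.addLayer(polygonLayer);

        // ...and layerOptions pushes the same map down to the handler's
        // sketch layer, where the "temporary" render intent is applied.
        var drawControl = new OpenLayers.Control.DrawFeature(
            polygonLayer,
            OpenLayers.Handler.Polygon,
            {handlerOptions: {layerOptions: {styleMap: styleMap}}}
        );
        map.addControl(drawControl);
        drawControl.activate();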

    Read the article

  • Android - Custom Icons in ListView

    - by Ryan
    Is there any way to place a custom icon for each group item? Like for phone I'd like to place a phone, for housing I'd like to place a house. Here is my code, but it keeps throwing a warning and locks up on me.

        ListView myList = (ListView) findViewById(R.id.myList);
        //ExpandableListAdapter adapter = new MyExpandableListAdapter(data);
        List<Map<String, Object>> groupData = new ArrayList<Map<String, Object>>();
        Iterator it = data.entrySet().iterator();
        while (it.hasNext()) {
            // Get the key name and value for it
            Map.Entry pair = (Map.Entry) it.next();
            String keyName = (String) pair.getKey();
            String value = pair.getValue().toString();

            // Add the parents -- aka main categories
            Map<String, Object> curGroupMap = new HashMap<String, Object>();
            groupData.add(curGroupMap);
            if (value == "Phone")
                curGroupMap.put("ICON", findViewById(R.drawable.phone));
            else if (value == "Housing")
                curGroupMap.put("NAME", keyName);
            curGroupMap.put("VALUE", value);
        }

        // Set up our adapter
        mAdapter = new SimpleAdapter(
            mContext,
            groupData,
            R.layout.exp_list_parent,
            new String[] { "ICON", "NAME", "VALUE" },
            new int[] { R.id.iconImg, R.id.rowText1, R.id.rowText2 }
        );
        myList.setAdapter(mAdapter);

    The error I'm getting:

        05-28 17:36:21.738: WARN/System.err(494): java.io.IOException: Is a directory
        05-28 17:36:21.809: WARN/System.err(494): at org.apache.harmony.luni.platform.OSFileSystem.readImpl(Native Method)
        05-28 17:36:21.838: WARN/System.err(494): at org.apache.harmony.luni.platform.OSFileSystem.read(OSFileSystem.java:158)
        05-28 17:36:21.851: WARN/System.err(494): at java.io.FileInputStream.read(FileInputStream.java:319)
        05-28 17:36:21.879: WARN/System.err(494): at java.io.BufferedInputStream.fillbuf(BufferedInputStream.java:183)
        05-28 17:36:21.908: WARN/System.err(494): at java.io.BufferedInputStream.read(BufferedInputStream.java:346)
        05-28 17:36:21.918: WARN/System.err(494): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
        05-28 17:36:21.937: WARN/System.err(494): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:459)
        05-28 17:36:21.948: WARN/System.err(494): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:271)
        05-28 17:36:21.958: WARN/System.err(494): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:296)
        05-28 17:36:21.978: WARN/System.err(494): at android.graphics.drawable.Drawable.createFromPath(Drawable.java:801)
        05-28 17:36:21.988: WARN/System.err(494): at android.widget.ImageView.resolveUri(ImageView.java:501)
        05-28 17:36:21.998: WARN/System.err(494): at android.widget.ImageView.setImageURI(ImageView.java:289)

    Thanks in advance for your help!!

    Read the article

  • JAXB + JAK java.lang.ClassCastException

    - by Ivansek
    Hi, I read the page about the JAK implementation, which shows a piece of code from the pom.xml file (Listing 1). This piece of code is actually commented out in pom.xml, so I uncommented it in order to add my own schema to the compilation. After that I run the command mvn -e clean install and get this error:

        ivansek ~/Sites/Xlab/KMLTest/javaapiforkml-read-only $ mvn -e clean install
        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building a Java API for Kml
        [INFO] task-segment: [clean, install]
        [INFO] ------------------------------------------------------------------------
        [INFO] [clean:clean {execution: default-clean}]
        [INFO] [antrun:run {execution: xjc-invocation}]
        [INFO] Executing tasks
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException
            at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:131)
            at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:98)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            ... 17 more
        Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:116)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:357)
            at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:118)
            ... 20 more
        Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException
            at java.util.ServiceLoader.fail(ServiceLoader.java:207)
            at java.util.ServiceLoader.access$100(ServiceLoader.java:164)
            at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:353)
            at java.util.ServiceLoader$1.next(ServiceLoader.java:421)
            at com.sun.tools.xjc.Options.findServices(Options.java:910)
            at com.sun.tools.xjc.Options.getAllPlugins(Options.java:351)
            at com.sun.tools.xjc.Options.parseArgument(Options.java:650)
            at com.sun.tools.xjc.Options.parseArguments(Options.java:760)
            at com.sun.tools.xjc.XJC2Task._doXJC(XJC2Task.java:453)
            at com.sun.tools.xjc.XJC2Task.doXJC(XJC2Task.java:443)
            at com.sun.tools.xjc.XJC2Task.execute(XJC2Task.java:369)
            at com.sun.istack.tools.ProtectedTask.execute(ProtectedTask.java:55)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            ... 23 more
        Caused by: java.lang.ClassCastException
            at java.lang.Class.cast(Class.java:2990)
            at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:345)
            ... 38 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 3 seconds
        [INFO] Finished at: Thu May 13 09:53:19 CEST 2010
        [INFO] Final Memory: 16M/79M
        [INFO] ------------------------------------------------------------------------

    Any suggestions?

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data, and I have been looking for an easy-to-use embedded database. I really like db4o, but I'm not wanting to open source this particular project, so it was not an option. Then I remembered that there was an ADO.NET provider for sqlite. Being a fan of sqlite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010, which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4.

    I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the sqlite database, and to my surprise it worked without any problems. The model showed up just like it would for any database, so I started to write a little code to test and then... BAM!... Exception:

        "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information."

    A quick bit of searching on Bing found the answer. To get it working, you need to include the following code in your application's config file (app.config, or web.config for a web app):

        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
        </startup>

    And then everything magically works. Entity Framework 4 features worked, like lazy loading, and even the POCO templates worked. The only thing that didn't work was model-first development. The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications.

    The only other oddity I found was that in order to have an auto-incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick. This translates to an Int64, or a long, when working with it in Entity Framework. Not a big deal, but something you need to be aware of.

    All in all, I am quite impressed with the Entity Framework support I found with sqlite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
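
    To make the auto-increment caveat concrete, here is a minimal, hypothetical table definition (table and column names are made up, not from the post). In sqlite, a column declared exactly as INTEGER PRIMARY KEY becomes an alias for the rowid and auto-increments, and the ADO.NET provider surfaces it as Int64, so the corresponding entity property must be a long:

        CREATE TABLE Customer (
            -- INTEGER (not INT) PRIMARY KEY aliases sqlite's rowid, which is
            -- what gives auto-increment behavior; it maps to Int64/long in EF.
            Id   INTEGER PRIMARY KEY AUTOINCREMENT,
            Name TEXT NOT NULL
        );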

    Read the article

  • Analyst Firm Gives Oracle Highest Rating for Local Government CRM

    - by michael.seback
    Gartner, Inc. has given Oracle a rating of "Strong Positive," the highest possible ranking, in its report "MarketScope for Local Government CRM Products." The report compares the offerings of nine providers of CRM commercial off-the-shelf software for local government agencies. Gartner notes that a provider receiving a Strong Positive ranking must be a "provider of strategic products, services or solutions..." and recommends that "customers continue with planned investments and potential customers consider this vendor a strong choice for strategic investments." "Local governments today face tough challenges as they are tasked with reducing costs while at the same time providing citizens with services and information more quickly and efficiently than ever before. Oracle is pleased to be recognized by Gartner with a Strong Positive rating in its 'MarketScope for Local Government CRM Products' report, as we believe it reflects our commitment to helping our public sector customers meet these challenges today and in the future," said Mark Johnson, senior vice president, Oracle Public Sector. Read the highlights.

    Read the article

  • What's new in Subtext 2.5: full-text search, related posts and more

    In Subtext 2.5 we changed the internal search provider from the LIKE '%term%' SQL-based one to a more mature and powerful one powered by Lucene.net. I wrote about how Lucene.net is implemented inside Subtext, but that didn't show the benefits for the users. In this post I'm explaining the visible features of the full-text search. There are four places where the new Lucene.net-based search engine has its effect: full-text search, related links, more results for the search, and an OpenSearch provider.

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements:

    - I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail).
    - Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be a notification per recipient (or several) per platform.
    - Each notification can contain platform-specific payload (for example, an MMS can contain a sound or an image).
    - The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server.

    It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then the system needs to send the notification to the platform provider. For now, I've ended up with some thoughts, but I don't know how scalable the design will be or if it is a good design. I've thought of the following objects (in a pseudo-language), starting with a generic Notification object:

        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batch, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate this later to make sure it is scalable?

    In the second step, I need to process this notification. But how could I dispatch the notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification? Or something like Notification.setType('MMS')?

    To allow processing a lot of notifications at the same time, I think a messaging queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above?

    Then I imagine a NotificationProcessor object to which I could add NotificationHandlers, each NotificationHandler being in charge of connecting to a platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior.

    Any feedback or ideas? Thanks for giving your time.

    Note: I'm used to working in PHP and it is likely the language of my choice.
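
    As a purely illustrative sketch of the processor/handler idea from the question -- all names are hypothetical, and it is shown in JavaScript for brevity even though the poster works in PHP -- dispatching on a type field avoids a subclass per platform, and per-handler batch sizes cover the providers that require batched sends:

        // Hypothetical sketch: one handler per platform provider,
        // dispatched on notification.type rather than on subclasses.
        function NotificationProcessor() {
            this.handlers = {}; // type -> handler
        }

        NotificationProcessor.prototype.register = function (type, handler) {
            this.handlers[type] = handler;
        };

        NotificationProcessor.prototype.process = function (notification) {
            var handler = this.handlers[notification.type];
            if (!handler) {
                throw new Error("No handler registered for type: " + notification.type);
            }
            // Batch recipients if the provider requires it; batchSize is an
            // assumed per-handler property, not a real API.
            var size = handler.batchSize || notification.recipients.length;
            for (var i = 0; i < notification.recipients.length; i += size) {
                handler.send(notification, notification.recipients.slice(i, i + size));
            }
        };

        // Usage: each queue worker could pop a notification and do this.
        var processor = new NotificationProcessor();
        processor.register("mms", {
            batchSize: 100,
            send: function (notification, recipients) {
                // call the MMS provider's API here
            }
        });
        processor.process({type: "mms", message: "...", payload: {}, recipients: ["a", "b"]});

    The recipients themselves would stay in the persistent store and be streamed to the workers in pages, so no single process ever holds the full recipient list in memory.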

    Read the article

  • Scipy Negative Distance? What?

    - by disappearedng
    I have an input file in which all values are floating point numbers to 4 decimal places, i.e. 13359 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0007 ... (the first is the id). My class uses the loadVectorsFromFile method, which multiplies each value by 10000 and then int()s these numbers. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I am continually seeing the error "Linkage Z contains negative values". I seriously think this is a bug, because: I checked my values, and they are nowhere near small enough or big enough to approach the limits of floating point numbers; and the formula that I used to derive the values in the file uses absolute value (my input is DEFINITELY right). Can someone enlighten me as to why I am seeing this weird error? What is going on that is causing this negative distance error?

    =====

        def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True):
            """Inflate to prevent "negative" distance, we use 4 decimal points, so *10000"""
            vectors = {}
            self.winfo("Each vector is set to have %d limit in length" % limit)
            with open(loc) as inf:
                for line in filter(None, inf.read().split('\n')):
                    l = line.split('\t')
                    if limit:
                        scores = map(float, l[1:limit+1])
                    else:
                        scores = map(float, l[1:])
                    if inflate:
                        vectors[l[0]] = map(lambda x: int(x*10000), scores)  # int might save space
                    else:
                        vectors[l[0]] = scores
            if assertAllPositive:
                # Assert that it has no negative value
                for dirID, l in vectors.iteritems():
                    if reduce(operator.or_, map(lambda x: x < 0, l)):
                        self.werror("Vector %s has negative values!" % dirID)
            return vectors

        def main(self, inputDir, outputDir, limit=0, inFname="data.vectors.all", mappingFname='all.id.features.group.intermediate'):
            """
            Loads vector from a file and start clustering
            INPUT
            vectors is { featureID: tfidfVector (list), }
            """
            IDFeatureDic = loadIdFeatureGroupDicFromIntermediate(pjoin(self.configDir, mappingFname))
            if not os.path.exists(outputDir):
                os.makedirs(outputDir)
            vectors = self.loadVectorsFromFile(limit, pjoin(inputDir, inFname))
            for threshold in map(lambda x: float(x)/30, range(20, 30)):
                clusters = self._hclustering(threshold, vectors)
                if clusters:
                    outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold))
                    with open(outputLoc, 'w') as outf:
                        for clusterNo, cluster in clusters.iteritems():
                            outf.write('%s\n' % str(clusterNo))
                            for featureID in cluster:
                                feature, group = IDFeatureDic[featureID]
                                outline = "%s\t%s\n" % (feature, group)
                                outf.write(outline.encode('utf-8'))
                            outf.write("\n")
                else:
                    continue

        def _hclustering(self, threshold, vectors):
            """function which you should call to vary the threshold
            vectors: { featureID: [ tfidf scores, tfidf score, .. ]
            """
            clusters = defaultdict(list)
            if len(vectors) > 1:
                try:
                    results = hierarchy.fclusterdata(vectors.values(), threshold, metric='cosine')
                except ValueError, e:
                    self.werror("_hclustering: %s" % str(e))
                    return False
                for i, featureID in enumerate(vectors.keys()):

    Read the article

  • Mapstraction: Changing an Icon's image URL after it has been added?

    - by Paul Owens
    I am trying to use marker.setIcon() to change a marker's image. However, it appears that although this changes the marker.iconUrl attribute, the icon itself is using marker.proprietary_marker.$.icon.image to display the marker's image -- so the marker's icon remains unchanged. Is there a way to dynamically change marker.proprietary_marker.$.icon.image?

    To reproduce: add a marker. Check the icon's image URL and the proprietary icon's image -- they're the same. Change the icon. Again check the URLs. Now the icon URL has changed, but the marker still shows the old image, which is in the proprietary marker object.

        <head>
        <title>Map Test</title>
        <script src="http://maps.google.com/maps?file=api&v=2&key=Your-Google-API-Key" type="text/javascript"></script>
        <script src="mapstraction.js"></script>
        <script type="text/javascript">
        var map;
        var marker;

        function getMap() {
            map = new mxn.Mapstraction('myMap', 'google');
            map.setCenterAndZoom(new mxn.LatLonPoint(45.559242, -122.636467), 15);
        }

        function addMarker() {
            marker = new mxn.Marker(new mxn.LatLonPoint(45.559242, -122.636467));
            marker.addData({infoBubble: "Text", label: "Label", marker: 4, icon: "http://mapscripting.com/examples/mashups/richter-high.png"});
            map.addMarker(marker);
        }

        function changeIcon() {
            marker.setIcon("http://assets1.mapufacture.com/images/markers/usgs_marker.png");
        }

        function showIconURL() {
            alert(marker.iconUrl);
        }

        function showProprietaryIconURL() {
            alert(marker.proprietary_marker.$.icon.image);
        }
        </script>
        </head>
        <body onload="getMap()">
        <div id="myMap" style="width:627px; height:412px;"></div>
        <div>
        <input type="button" value="add marker" OnClick="addMarker();">
        <input type="button" value="change icon" OnClick="changeIcon();">
        <input type="button" value="show icon URL" OnClick="showIconURL();">
        <input type="button" value="show proprierty icon URL " OnClick="showProprietaryIconURL();">
        </div>
        </body>
        </html>
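
    One possible workaround, sketched under the assumption that the underlying proprietary marker here is a Google Maps v2 GMarker (which exposes a setImage method); this updates the native object directly rather than going through Mapstraction, and the function name is made up:

        // Hypothetical workaround: keep Mapstraction's bookkeeping in sync
        // and update the underlying Google Maps v2 GMarker in place.
        function changeIconImage(mxnMarker, url) {
            mxnMarker.iconUrl = url; // what setIcon() already updates
            var gMarker = mxnMarker.proprietary_marker;
            if (gMarker && gMarker.setImage) {
                gMarker.setImage(url); // swaps the displayed image
            }
        }

        // e.g. changeIconImage(marker, "http://assets1.mapufacture.com/images/markers/usgs_marker.png");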

    Read the article

  • Improve email Delivery Rates

    - by JMC
    I have a web server that sends legitimate transactional email in high quantities. A reasonable percentage of users report that they never receive the emails. For every message sent, there's also a blind carbon copy going to an unfiltered email box on a different provider, which I review to confirm the server actually sent the emails. All of the emails make it to my bcc box, so the server is sending the emails properly. It seems to be a spam filtering problem at the other email providers. The hosting provider for the web server indicates that a reverse DNS lookup has been set up at their level, properly linking the sending IP address to my server and domain.

    Question: Is there anything else I can do to improve the delivery rate when third-party service providers are filtering the emails I'm sending? Is there anything I can set on the DNS that I control to show that the server sending the emails is legitimate?
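
    On the DNS side, the usual additions beyond reverse DNS are SPF and DKIM records. A purely illustrative sketch of the zone entries involved -- the domain, IP address, selector, and key below are placeholders, not values from this setup:

        ; Hypothetical entries for example.com -- all values are placeholders.
        ; SPF: authorizes the web server's IP to send mail for the domain.
        example.com.                 IN TXT "v=spf1 ip4:203.0.113.25 ~all"
        ; DKIM: public key published under a selector; the mail server signs
        ; outgoing messages with the matching private key.
        mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."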

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service.

    Part 2 is nice and easy. In Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication.

    Pattern applicability

    This is a good fit for scenarios where:

    - the consumer can run .NET in full trust
    - the environment does not restrict use of external DLLs
    - the runtime environment is secure enough to keep shared secrets
    - the service does not need to know who is consuming it
    - the service does not need to know who the end-user is

    So for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is -- say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service, and it doesn't care who you are. In this post, we'll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2.

    Authenticating and authorizing with ACS

    In this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down its permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough of creating service identities):

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: fullTrustConsumer
        Output claim type: net.windows.servicebus.action
        Output claim value: Send

    This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener or manage the namespace.

    Adding a Service Reference

    The Part 2 sample client code is ready to go, but if you want to replicate the steps, you're going to add a WSDL reference, add a reference to Microsoft.ServiceBus, and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl

    If you add a Service Reference to that in a new project, you'll get a confused config section with a customBinding and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package ("windowsazure.servicebus") first and add the service reference, you'll get the same messy config.

    Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:

        <client>
          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                    behaviorConfiguration="SharedSecret"
                    binding="netTcpRelayBinding"
                    contract="FormatService.IFormatService" />
        </client>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SharedSecret">
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="fullTrustConsumer"
                                issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly against the local network service using any WCF binding -- the contract is exactly the same. The code is simple, standard WCF stuff:

        using (var client = new FormatService.FormatServiceClient())
        {
            outputString = client.ReverseString(inputString);
        }

    Running the sample

    First, update Solution Items\AzureConnectionDetails.xml with your Service Bus namespace and your service identity credentials for the netTcpClient and the provider:

        <!-- ACS credentials for the full trust consumer (Part2): -->
        <netTcpClient identityName="fullTrustConsumer"
                      symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>

    Then rebuild the solution and verify the unit tests work. If they're green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! -- your string will be reversed by your on-premise service, routed through Azure.

    Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus itself. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • plot markers on google maps with json and jquery

    - by mark
    I am trying to plot the markers defined in a JSON file on Google Maps, but they don't show on the map. Can somebody help me with this problem? This is the JSON file: http://sionvalais.com/gmap/markers/ These are the JavaScript functions:

        function loadMarkers() {
            var bounds = map.getBounds();
            var zoomLevel = map.getZoom();
            $.post("/gmaps/markers/index.php",
                {zoom: zoomLevel,
                 swLat: bounds.getSouthWest().lat(),
                 swLon: bounds.getSouthWest().lng(),
                 neLat: bounds.getNorthEast().lat(),
                 neLon: bounds.getNorthEast().lng()},
                function(data) {
                    processMarkers(data, _smallMarkerSize);
                },
                "json"
            );
        }

        function processMarkers(webcams, markerSize) {
            var marker = null;
            var markersInView = new Array();
            var idsInView = new Array();
            // Loop through the new webcams
            for (var i = 0; i < webcams.length; i++) {
                var idx = markers.indexOf(webcams[i].id);
                if (idx == -1) {
                    var info_html = "<table class='infowindow'>";
                    info_html += "<tr><td class='img'>";
                    info_html += "<img src='" + webcams[i].smallimg + "' /><td>";
                    info_html += "<td><p><b>" + webcams[i].loc + "</b>";
                    info_html += "<br /><a href='/webcam/" + webcams[i].url + "' target='_blank'>Show webcam</a></p></td></tr>";
                    info_html += "</table>";
                    marker = new WebcamMarker(new GLatLng(webcams[i].latitude, webcams[i].longitude), {image: "" + webcams[i].smallimg + "", height: markerSize, width: markerSize});
                    marker.myhtml = info_html;
                    map.addOverlay(marker);
                    markersInView[webcams[i].id] = marker;
                } else {
                    markersInView[webcams[i].id] = markers[webcams[i].id];
                }
                idsInView.push(webcams[i].id);
            }
            // Now remove the markers outside of the viewport
            for (var i = 0; i < webcamids.length; i++) {
                var idx = markersInView.indexOf(webcamids[i]);
                if (idx == -1) {
                    marker = markers[webcamids[i]];
                    map.removeOverlay(marker);
                }
            }
            markers = markersInView;
            webcamids = idsInView;
        }
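
    One way to isolate the problem is a minimal sanity check with plain GMarkers and no custom overlay class -- a sketch assuming the Maps v2 API is loaded, a global map exists as in the question, and the feed returns a JSON array of objects with id, latitude, longitude, and loc fields; if these markers appear, the issue lies in WebcamMarker or the POST parameters rather than in the data:

        // Hypothetical sanity check: fetch the feed and drop plain GMarkers.
        function plotPlainMarkers() {
            $.getJSON("/gmaps/markers/index.php", function (webcams) {
                for (var i = 0; i < webcams.length; i++) {
                    var point = new GLatLng(
                        parseFloat(webcams[i].latitude),   // guard against
                        parseFloat(webcams[i].longitude)   // string coordinates
                    );
                    map.addOverlay(new GMarker(point));
                }
            });
        }

    The parseFloat guard is there because JSON produced by PHP often encodes coordinates as strings, which is a common reason markers silently fail to render.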

    Read the article
