Search Results

Search found 31200 results on 1248 pages for 'field service'.


  • Creating DescriptionAttribute on Enumeration Field using System.Reflection.Emit

    - by Manish Sinha
    I have a list of strings which are candidates for enumeration values. They are: Don't send diffs, 500 lines, 1000 lines, 5000 lines, Send entire diff. The problem is that spaces and special characters are not allowed in identifiers, and identifiers cannot start with a number, so I would be sanitizing these values down to letters, numbers and _. To keep the original values I thought of putting these strings in the DescriptionAttribute, such that the final enum should look like public enum DiffBehvaiour { [Description("Don't send diffs")] Dont_send_diffs, [Description("500 lines")] Diff_500_lines, [Description("1000 lines")] Diff_1000_lines, [Description("5000 lines")] Diff_5000_lines, [Description("Send entire diff")] Send_entire_diff } Then later, using code, I will retrieve the real string associated with the enumeration value, so that the correct string can be sent back to the web service to get the correct resource. I want to know how to create the DescriptionAttribute using System.Reflection.Emit. Basically the question is where and how to store the original string so that when the enumeration value is chosen, the corresponding value can be retrieved. I am also interested in knowing how to access the DescriptionAttribute when needed.
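
    A minimal sketch of one way to do both halves of this, assuming an EnumBuilder obtained elsewhere from ModuleBuilder.DefineEnum with int as the underlying type; the helper names below are made up for illustration:

        using System;
        using System.ComponentModel;
        using System.Reflection;
        using System.Reflection.Emit;

        static class EnumDescriptionSketch
        {
            // Attach the original string to an emitted enum member.
            public static void AddLiteral(EnumBuilder enumBuilder, string identifier, int value, string original)
            {
                ConstructorInfo ctor = typeof(DescriptionAttribute).GetConstructor(new[] { typeof(string) });
                FieldBuilder literal = enumBuilder.DefineLiteral(identifier, value);
                literal.SetCustomAttribute(new CustomAttributeBuilder(ctor, new object[] { original }));
            }

            // Recover the original string from a value of the finished enum type.
            public static string GetDescription(Enum value)
            {
                FieldInfo field = value.GetType().GetField(value.ToString());
                var attr = (DescriptionAttribute)Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute));
                return attr != null ? attr.Description : value.ToString();
            }
        }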

    Read the article

  • Apple Push notifications server - Feedback always returns zero tuples

    - by Franz
    Hi, I am developing an iPhone App that uses Apple Push Notifications. On the iPhone side everything is fine; on the server side I have a problem. Notifications are sent correctly; however, when I try to query the feedback service to obtain a list of devices from which the App has been uninstalled, I always get zero results. I know that I should obtain one result, as the App has been uninstalled from one of my test devices. After 24 hours and more I still have no results from the feedback service. Any ideas? Does anybody know how long it takes for the feedback service to recognize that my App has been uninstalled from my test device? Could it be due to the sandbox environment?

    Read the article

  • Debugging of native code

    - by graham.reeds
    I have a C# Service that is calling a C DLL that was originally written in VC6. There is a bug in the DLL which I am trying to inspect. After having a nightmare trying to get debug to work, I eventually added the dll to the VS2005 solution containing the C# Service and added the necessary _CRT_SECURE_NO_WARNINGS. The debug version of the service is registered using the 'installutil.exe' tool. I can get the debugger to break just before the line where the dll is entered via a call to System.Diagnostics.Debugger.Break();. I found some instructions on the net regarding stepping into unmanaged code, and enabled the 'Enable unmanaged code debugging' check box; I've also tried turning on the Options-Debugging-Native 'Load DLL exports' and 'Enable RPC Debugging' options (even though it's not COM). I've also copied the debug dll and .pdb to the same bin directory as the service. However, the unmanaged code is not being stepped into, which is what I really need. UPDATE: I found the Debugging Type in the DLL properties and set it to 'Mixed' as per the suggestion on several sites, but to no avail.

    Read the article

  • Thread used for ServiceConnection callback (Android)

    - by Jannick
    Hi I'm developing an activity that binds to a local service (in onCreate of the activity): bindService(new Intent(this, CommandService.class), svcConn, BIND_AUTO_CREATE); I would like to be able to call methods through the IBinder in my lifecycle methods, but can not be sure that onServiceConnected have been called prior to these. I'm thinking of handling this by adding a queue of sorts in the ServiceConnection implementation, so that the method calls (Command pattern) will be executed once the connection is established. My questions are then: Is this stupid, any better ways? :) Are there any specification for which thread will be used to execute the ServiceConnection callbacks? More to the point, do I need to worry about synchronizing a queue datastructure? Edit - something like: public void onServiceConnected(ComponentName name, IBinder service) { dispatchService = (DispatchAsync)service; for(ExecutionTask task : queue){ dispatchService.execute(task.getCommand(), task); } }
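
    For what it's worth, ServiceConnection callbacks are delivered on the application's main thread, so as long as the activity only touches the queue from the main thread, no extra synchronization should be needed. A rough sketch of the enqueue-or-execute idea, reusing the asker's DispatchAsync and ExecutionTask types:

        private final List<ExecutionTask> queue = new ArrayList<ExecutionTask>();
        private DispatchAsync dispatchService;

        public void enqueue(ExecutionTask task) {
            if (dispatchService != null) {
                dispatchService.execute(task.getCommand(), task);  // already connected
            } else {
                queue.add(task);  // drained in onServiceConnected, as in the snippet above
            }
        }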

    Read the article

  • How to pass Remote Interface (aidl) throughout Activities ?

    - by Spredzy
    Hi All, I am developing an application using services and a Remote interface. I have a question about passing the reference to my Remote interface between Activities. In my first Activity, I bind my service with my activity; in order to get a reference to my interface I use private ServiceConnection mConnection = new ServiceConnection() { @Override public void onServiceConnected(ComponentName arg0, IBinder service) { x = X.Stub.asInterface(service); } @Override public void onServiceDisconnected(ComponentName arg0) { // TODO Auto-generated method stub } }; x being the reference to my interface. Now I would like to access this interface from another activity. I see two ways to do it but I don't know which one is the "proper" way: passing x with my intent when I call the new Activity, or redoing this.bindService(new Intent(y.this,z.class), mConnection, Context.BIND_AUTO_CREATE); in the onCreate() of my new Activity. What would you advise me to do?

    Read the article

  • Joomla: forms with upload and custom fields from inside the administration panel

    - by Stathis
    I want a plugin for Joomla, like jForms or ChronoForms, in order to make a form to upload videos along with other custom fields to the db and manage them. The only problem is that I want this functionality to be available from inside the administrator console and not to appear on a page at my site's frontend. My site does not have a login service, so I need to make the admin able to log in to the administration panel and from there upload and manage videos. Do you know of a plugin which supports this functionality? Thank you in advance.

    Read the article

  • Not able to access any method with the [WebInvoke] attribute in WCF RESTful services

    - by user1335978
    I am not able to access any method with the attribute [WebInvoke] in a RESTful WCF service. My code is like this: [OperationContract] [WebInvoke(Method = "Post", UriTemplate = "Comosite/{composite}", ResponseFormat = WebMessageFormat.Xml)] CompositeType GetDataUsingDataContract(string composite); On executing the above service I am getting an error message Method not allowed. I tried many ways, by modifying the UriTemplate, method name, method type etc., but nothing is working out. But if I use the [WebGet] attribute, the service method works fine. Can anybody suggest what I can do to make it work? Thanks in advance... :)
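
    A common cause of exactly this "Method not allowed" (HTTP 405) response is the casing of the verb: WebInvoke matches the HTTP method literally, and clients send POST in upper case. A hedged sketch of the usual fix, leaving everything else as in the question; the client also has to issue an actual POST to that URI:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "Comosite/{composite}",
                   ResponseFormat = WebMessageFormat.Xml)]
        CompositeType GetDataUsingDataContract(string composite);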

    Read the article

  • In a web project, can we write the core services layer without knowledge of the UI?

    - by Silent Warrior
    I am working on a web project. We are using Flex as the UI layer. My question: often we write the core service layer separately from the web/UI layer so we can reuse the same services for a different UI layer/technology. So, practically, is it possible to reuse the same core layer services, without any changes or additions to the API, with different kinds of UI technologies/layers? For example, the same core service layer with a UI technology which supports synchronous request/response (e.g. JSP) and a non-synchronous or event-driven UI technology (e.g. Ajax, Flex, GWT), or with multiple devices (computers, mobiles, PDAs, etc.). Personally I feel it's very tough to write a core service layer without any knowledge of the UI. Looking for thoughts from other people.

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • jQuery parseFloat: assigning val to a field

    - by user306472
    I have a select box that gives a description of a product along with a price. Depending on what the user selects, I'd like to automatically grab that dollar amount from the option selected and assign it to a price input field. My HTML: <tr> <td> <select class="selector"> <option value="Item One $500">Item One $500</option> <option value="Item Two $400">Item Two $400</option> </select> </td> <td> <input type="text" class="price"></input> </td> </tr> So based on what is selected, I want either 500 or 400 assigned to the .class input. I tried this but I'm not quite sure where I'm going wrong: $('.selector').blur(function(){ var selectVal = ('.selector > option.val()'); var parsedPrice = parseFloat(selectVal.val()); $('.price').val(parsedPrice); });
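
    A sketch of one way to wire this up, assuming the dollar amount is always the last $-delimited piece of the option text; the change event is usually a better fit than blur for a select:

        $('.selector').change(function () {
            var text = $(this).val();                        // e.g. "Item One $500"
            var price = parseFloat(text.split('$').pop());   // -> 500
            $(this).closest('tr').find('.price').val(price);
        });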

    Read the article

  • Rally Rest .NET API throws KeyNotFoundException when posting a new defect without required field value

    - by Triet Pham
    I've tried to post a new defect to Rally via Rest .net api by the following code: var api = new RallyRestApi("<myusername>", "<mypassword>", "https://community.rallydev.com"); var defect = new DynamicJsonObject(); defect["Name"] = "Sample Defect"; defect["Description"] = "Test posting defect without required field value"; defect["Project"] = "https://trial.rallydev.com/slm/webservice/1.29/project/5808130051.js"; defect["SubmittedBy"] = "https://trial.rallydev.com/slm/webservice/1.29/user/5797741589.js"; defect["ScheduleState"] = "In-Progress"; defect["State"] = "Open"; CreateResult creationResult = api.Create("defect", defect); But the api throws a weird exception: System.Collections.Generic.KeyNotFoundException was unhandled Message=The given key was not present in the dictionary. Source=mscorlib StackTrace: at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Rally.RestApi.DynamicJsonObject.GetMember(String name) at Rally.RestApi.DynamicJsonObject.TryGetMember(GetMemberBinder binder, Object& result) at CallSite.Target(Closure , CallSite , Object ) at System.Dynamic.UpdateDelegates.UpdateAndExecute1[T0,TRet](CallSite site, T0 arg0) at Rally.RestApi.RallyRestApi.Create(String type, DynamicJsonObject obj) at RallyIntegrationSample.Program.Main(String[] args) in D:\Projects\qTrace\References\Samples\RallyIntegrationSample\Program.cs:line 24 The problem is when i checked out the trace log file of Rally, it showed exactly what wrong in the posting request: Rally.RestApi Post Response: { "CreateResult": { "_rallyAPIMajor":"1", "_rallyAPIMinor":"29", "Errors":["Validation error: Defect.Severity should not be null"], "Warnings":[] } } Instead of given the proper CreateResult object with according errors information in its property, the Rally Rest .Net Api throws an unexpected exception. Is that a mistake in Rally rest .net api or should i do any extra steps to get the CreatResult seamlessly in case of any errors returned by Rally service? Many thanks for your helps.
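
    Judging from the trace, the server rejects the create because Severity is required in that workspace, and the KeyNotFoundException looks like the client library stumbling over the error response rather than the root cause. A hedged workaround is simply to supply the field up front; "Major" is only an example value, and the allowed ones depend on the workspace's Severity dropdown:

        defect["Severity"] = "Major";  // example value, assumed valid for this workspace
        CreateResult creationResult = api.Create("defect", defect);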

    Read the article

  • How to get the millisecond value from a Timestamp field in Firebird with Delphi 2007

    - by Re0sless
    I have a Firebird database (running on server version 2.1.3) and am connecting to it with Delphi 2007 using the DBExpress objects (using the Interbase driver) One of my tables in the database looks something like this CREATE TABLE MYTABLE ( MYDATE Timestamp NOT NULL, MYINDEX Integer NOT NULL, ... Snip ... PRIMARY KEY (MYDATE ,MYINDEX) ); I can add to the table OK, and in Flame Robin it shows the timestamp field as having a millisecond value. But when I do a select all (select * from MYTABLE) on the table I can not get the millisecond value, as it is always returned as 000. This causes major problems as it is part of the primary key (unfortunately I didn't design the table and don't have authority to change it). I have tried the following to get the millisecond value: sql1.fieldbyname('MYDATE').AsDateTime; sql1.fieldbyname('MYDATE').AsSQLTimeStamp; sql1.fieldbyname('MYDATE').AsStirng; sql1.fieldbyname('MYDATE').AsFloat; But they all return 14/09/2009 14:25:06.000 when formatted. How do I retrieve the millisecond from a timestamp? UPDATE: In case this helps anyone in the future, here are the drivers I tried for DBExpress and the results. Embarcadero - dbExpress Driver for Firebird (Delphi 2010 Trial Version) - Milliseconds not supported in timestamps. Chau Chee Yang's - dbExpress Driver for Firebird (Delphi 2007) - Milliseconds not supported in timestamps. UpScene - InterXpress for Firebird (Delphi 2007) - Milliseconds are supported in timestamps. DevArt - dbExpress Driver for InterBase (Delphi 2007) - Milliseconds are supported in timestamps.
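
    As a server-side workaround while the driver situation is sorted out, Firebird 2.1 can return the milliseconds as a separate column with EXTRACT, which survives whatever the driver does to the timestamp; a sketch against the table from the question:

        SELECT MYDATE, MYINDEX, EXTRACT(MILLISECOND FROM MYDATE) AS MYDATE_MS
        FROM MYTABLE;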

    Read the article

  • Sharepoint OLE DB - field not updateable

    - by Pandincus
    I need to write a simple C# .NET application to retrieve, update, and insert some data in a Sharepoint list. I am NOT a Sharepoint developer, and I don't have control over our Sharepoint server. I would prefer not to have to develop this in a proper sharepoint development environment simply because I don't want to have to deploy my application on the Sharepoint server -- I'd rather just access data externally. Anyway, I found out that you can access Sharepoint data using OLE DB, and I tried it successfully using some ADO.NET: var db = DatabaseFactory.CreateDatabase(); DataSet ds = new DataSet(); using (var command = db.GetSqlStringCommand("SELECT * FROM List")) { db.LoadDataSet(command, ds, "List"); } The above works. However, when I try to insert: using (var command = db.GetSqlStringCommand("INSERT INTO List ([HeaderName], [Description], [Number]) VALUES ('Blah', 'Blah', 100)")) { db.ExecuteNonQuery(command); } I get this error: Cannot update 'HeaderName'; field not updateable. I did some Googling and apparently you cannot insert data through OLE DB! Does anyone know if there are some possible workarounds? I could try using Sharepoint Web Services, but I tried that initially and was having a heck of a time authenticating. Is that my only option?
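
    If the OLE DB route stays read-only, the Lists.asmx web service is the usual way to insert list items from outside the server. A rough sketch, assuming a web reference named ListsService has been added, a placeholder site URL, and that the Field names below are the list's internal names (which can differ from the display names):

        var lists = new ListsService.Lists();
        lists.Credentials = System.Net.CredentialCache.DefaultCredentials;  // or an explicit NetworkCredential
        lists.Url = "http://server/site/_vti_bin/Lists.asmx";               // placeholder URL

        var doc = new System.Xml.XmlDocument();
        var batch = doc.CreateElement("Batch");
        batch.SetAttribute("OnError", "Continue");
        batch.InnerXml = "<Method ID='1' Cmd='New'>" +
                         "<Field Name='HeaderName'>Blah</Field>" +
                         "<Field Name='Description'>Blah</Field>" +
                         "<Field Name='Number'>100</Field></Method>";
        lists.UpdateListItems("List", batch);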

    Read the article

  • jQuery changing fields to substring of related field

    - by Katherine
    Another jquery calculation question. I've this, which is sample code from the plugin site that I am playing with to get this working: function recalc(){ $("[id^=total_item]").calc( "qty * price", { qty: $("input[name^=qty_item_]"), price: $("input[name^=price_item_]"), }, function (s){ return s.toFixed(2);}, function ($this){ var sum = $this.sum(); $("#grandTotal").val( // round the results to 2 digits sum.toFixed(2) ); } ); } Changes in the price fields cause the totals to update: $("input[name^=price_item_]").bind("keyup", recalc); recalc(); Problem is I won't know the value of the price fields, they will be available to me only as a substring of values entered by the user, call it 'itemcode'. There will be a variable number of items, based on a php query. I've come up with this to change the price based on the itemcode: $("input[name^='itemcode_item_1']").keyup(function () { var codeprice = this.value.substring(2,6); $("input[name^='price_item_1']").val(codeprice); }); $("input[name^='itemcode_item_2']").keyup(function () { var codeprice = this.value.substring(2,6); $("input[name^='price_item_2']").val(codeprice); }); However while it does that, it also stops the item_total from updating. Also, I feel there must be a way to not need to write a numbered function for each item on the list. However when I just use $("input[name^='itemcode_item_']") updating any itemcode field updates all price fields, which is not good. Can anyone point me in the right direction? I know I am a bit clueless here, but javascript of any kind is not my thing.
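
    One handler can cover every row if the item number is pulled out of the field's own name, and calling recalc() afterwards addresses the totals no longer updating (setting a value with .val() does not fire the keyup event the totals are bound to). A sketch under those assumptions:

        $("input[name^='itemcode_item_']").keyup(function () {
            var idx = this.name.replace("itemcode_item_", "");   // "1", "2", ...
            var codeprice = this.value.substring(2, 6);
            $("input[name='price_item_" + idx + "']").val(codeprice);
            recalc();  // keep the item totals and grand total in sync
        });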

    Read the article

  • Left Join with a OneToOne field in Django

    - by jamida
    I have 2 tables, simpleDB_all and simpleDB_some. The "all" table has an entry for every item I want, while the "some" table has entries only for some items that need additional information. The Django models for these are: class all(models.Model): name = models.CharField(max_length=40) important_info = models.CharField(max_length=40) class some(models.Model): all_key = models.OneToOneField(all) extra_info = models.CharField(max_length=40) I'd like to create a view that shows every item in "all" with the extra info if it exists in "some". Since I'm using a 1-1 field I can do this with almost complete success: allitems = all.objects.all() for item in allitems: print item.name, item.important_info, item.some.extra_info but when I get to the item that doesn't have a corresponding entry in the "some" table I get a DoesNotExist exception. Ideally I'd be doing this loop inside a template, so it's impossible to wrap it around a "try" clause. Any thoughts? I can get the desired effect directly in SQL using a query like this: SELECT all.name, all.important_info, some.extra_info FROM all LEFT JOIN some ON all.id = some.all_key_id; But I'd rather not use raw SQL.
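
    A small sketch of one way around the DoesNotExist problem: wrap the optional reverse relation in a property on the model, so the template can simply use {{ item.extra_info }}:

        class all(models.Model):
            name = models.CharField(max_length=40)
            important_info = models.CharField(max_length=40)

            @property
            def extra_info(self):
                try:
                    return self.some.extra_info
                except some.DoesNotExist:
                    return ""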

    Read the article

  • How to retain XML string as a string field during XML deserialization

    - by detale
    I got an XML input string and want to deserialize it to an object which partially retain the raw XML. <SetProfile> <sessionId>A81D83BC-09A0-4E32-B440-0000033D7AAD</sessionId> <profileDataXml> <ArrayOfProfileItem> <ProfileItem> <Name>Pulse</Name> <Value>80</Value> </ProfileItem> <ProfileItem> <Name>BloodPresure</Name> <Value>120</Value> </ProfileItem> </ArrayOfProfileItem> </profileDataXml> </SetProfile> The class definition: public class SetProfile { public Guid sessionId; public string profileDataXml; } I hope the deserialization syntax looks like string inputXML = "..."; // the above XML XmlSerializer xs = new XmlSerializer(typeof(SetProfile)); using (TextReader reader = new StringReader(inputXML)) { SetProfile obj = (SetProfile)xs.Deserialize(reader); // use obj .... } but XMLSerializer will throw an exception and won't output < profileDataXml 's descendants to "profileDataXml" field in raw XML string. Is there any way to implement the deserialization like that?
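
    A minimal sketch of one approach, assuming the usual System.Xml and System.Xml.Serialization usings: let XmlSerializer hand over the <profileDataXml> element as an XmlElement via XmlAnyElement, and expose its inner XML as a string on the side:

        public class SetProfile
        {
            public Guid sessionId;

            [XmlAnyElement("profileDataXml")]
            public XmlElement profileDataElement;

            [XmlIgnore]
            public string profileDataXml
            {
                get { return profileDataElement == null ? null : profileDataElement.InnerXml; }
            }
        }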

    Read the article

  • Add newline to a text field (Win32)

    - by user146780
    I'm making a Notepad clone. Right now my text loads fine but where their are newline characters, they do not make newlines in the text field. I load it like this: void LoadText(HWND ctrl,HWND parent) { int leng; char buf[330000]; char FileBuffer[500]; memset(FileBuffer,0,500); FileBuffer[0] = '*'; FileBuffer[1] = '.'; FileBuffer[2] = 't'; FileBuffer[3] = 'x'; FileBuffer[4] = 't'; OPENFILENAMEA ofn; memset(&ofn, 0, sizeof(OPENFILENAMEA)); ofn.lStructSize = sizeof(OPENFILENAMEA); ofn.hwndOwner = parent; ofn.lpstrFile = FileBuffer; ofn.nMaxFile = 500; ofn.lpstrFilter = "Filetype (*.txt)\0\0"; ofn.lpstrDefExt = "txt"; ofn.Flags = OFN_EXPLORER; if(!GetOpenFileNameA(&ofn)) { return; } ifstream *file; file = new ifstream(FileBuffer,ios::in); int lenn; lenn = 0; while (!file->eof()) { buf[lenn] = file->get(); lenn += 1; } buf[lenn - 1] = 0; file->read(buf,lenn); SetWindowTextA(ctrl,buf); file->close(); } How can I make it do the new line characters? Thanks
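
    Two things usually cause this: the edit control needs the ES_MULTILINE style, and a multiline EDIT control only breaks lines on CR-LF, so files containing bare '\n' need it expanded before SetWindowText. A sketch of the conversion (assuming <string> is included and buf holds the NUL-terminated file contents loaded above):

        std::string crlf;
        for (int i = 0; buf[i] != '\0'; ++i) {
            if (buf[i] == '\n' && (i == 0 || buf[i - 1] != '\r'))
                crlf += '\r';  // expand bare LF to CR-LF
            crlf += buf[i];
        }
        SetWindowTextA(ctrl, crlf.c_str());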

    Read the article

  • How to do Grouping using JPA annotation with mapping given field

    - by hemal
    I am using JPA Annotation mapping with the table given below, but having problem that i am doing mapping on same table but on diffrent field given ProductImpl.java @Entity @Table(name = "Product") public class ProductImpl extends SimpleTagGroup implements Product { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private long id = -1; @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @JoinTable(name = "ProductTagMapping", joinColumns =@JoinColumn(name = "productId"), inverseJoinColumns =@JoinColumn(name = "tagId")) private List<SimpleTag> tags; @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @JoinTable(name = "ProductTagMapping", joinColumns =@JoinColumn(name = "productId"), inverseJoinColumns =@JoinColumn(name = "tagId")) private List<SimpleTag> licenses; @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL) @JoinTable(name = "ProductTagMapping", joinColumns =@JoinColumn(name = "productId"), inverseJoinColumns =@JoinColumn(name = "tagId")) private List<SimpleTag> os; I want to get values like windows and linux in os , GPLv2 and GPLv3 in licenses ,so we are using TagGroup table . but here i got all tagValues in each of the os,licenses and tag fileds,so how could i do group by or some other things with JPA. and ProductTagMapping is the mapping table between Tag and TagGroup TagGroup Table ID TAGGROUPNAME 1 PRODUCTTYPE 2 LICENSE 3 TAGS 4 OS SimpleTag ID TAGVALUE 1 Application 2 Framework 3 Apache2 4 GPLv2 5 GPLv3 6 learning 7 Linux 8 Windows 9 mature

    Read the article

  • Manually listing objects in Django (problem with field ordering)

    - by Chris
    I'm having some trouble figuring out the best/Djangoic way to do this. I'm creating something like an interactive textbook. It has modules, which are more or less like chapters. Each module page needs to list the topics in that module, grouped into sections. My question is how I can ensure that they list in the correct order in the template? Specifically: 1) How to ensure the sections appear in the correct order? 2) How to ensure the topics appear in the correct order in the section? I imagine I could add a field to each model purely for the sake of ordering, but the problem with that is that a topic might appear in different modules, and in whatever section they are in there they would again have to be ordered somehow. I would probably give up and do it all manually were it not for the fact that I need to have the Topic as object in the template (or view) so I can mark it up according to how the user has labeled it. So I suppose my question is really to do with whether I should create the contents pages manually, or whether there is a way of ordering the query results in a way I haven't thought of. Thanks for your help!
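
    One pattern that fits "a topic can appear in different modules with a different order in each" is to keep the ordering on the relation rather than on the Topic itself, via explicit through-style models. A hedged sketch with hypothetical Module and Topic models (the names below are made up, not from the question):

        class Section(models.Model):
            module = models.ForeignKey(Module)
            title = models.CharField(max_length=100)
            position = models.PositiveIntegerField()

            class Meta:
                ordering = ['position']

        class SectionTopic(models.Model):
            section = models.ForeignKey(Section)
            topic = models.ForeignKey(Topic)
            position = models.PositiveIntegerField()

            class Meta:
                ordering = ['position']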

    Read the article

  • The type of field isn't supported by declared persistence strategy "OneToMany"

    - by Robert
    We are new to JPA and trying to setup a very simple one to many relationship where a pojo called Message can have a list of integer group id's defined by a join table called GROUP_ASSOC. Here is the DDL: CREATE TABLE "APP"."MESSAGE" ( "MESSAGE_ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1) ); ALTER TABLE "APP"."MESSAGE" ADD CONSTRAINT "MESSAGE_PK" PRIMARY KEY ("MESSAGE_ID"); CREATE TABLE "APP"."GROUP_ASSOC" ( "GROUP_ID" INTEGER NOT NULL, "MESSAGE_ID" INTEGER NOT NULL ); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_PK" PRIMARY KEY ("MESSAGE_ID", "GROUP_ID"); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_FK" FOREIGN KEY ("MESSAGE_ID") REFERENCES "APP"."MESSAGE" ("MESSAGE_ID"); Here is the pojo: @Entity @Table(name = "MESSAGE") public class Message { @Id @Column(name = "MESSAGE_ID") @GeneratedValue(strategy = GenerationType.IDENTITY) private int messageId; @OneToMany(fetch=FetchType.LAZY, cascade=CascadeType.PERSIST) private List groupIds; public int getMessageId() { return messageId; } public void setMessageId(int messageId) { this.messageId = messageId; } public List getGroupIds() { return groupIds; } public void setGroupIds(List groupIds) { this.groupIds = groupIds; } } When we try to execute the following test code we get <openjpa-1.2.3-SNAPSHOT-r422266:907835 fatal user error> org.apache.openjpa.util.MetaDataException: The type of field "pojo.Message.groupIds" isn't supported by declared persistence strategy "OneToMany". Please choose a different strategy. Message msg = new Message(); List groups = new ArrayList(); groups.add(101); groups.add(102); EntityManager em = Persistence.createEntityManagerFactory("TestDBWeb").createEntityManager(); em.getTransaction().begin(); em.persist(msg); em.getTransaction().commit(); Help!
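
    The underlying issue is that @OneToMany expects the target to be an entity, while GROUP_ASSOC here is just a bag of integers. With a JPA 2.0 provider (e.g. OpenJPA 2.x rather than the 1.2 snapshot in the error message) the usual mapping for that is @ElementCollection; a sketch of the field:

        @ElementCollection(fetch = FetchType.LAZY)
        @CollectionTable(name = "GROUP_ASSOC", joinColumns = @JoinColumn(name = "MESSAGE_ID"))
        @Column(name = "GROUP_ID")
        private List<Integer> groupIds;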

    Read the article

  • iPhone: understanding field crash reports: unrecognized selector?

    - by quixoto
    Hi all, A user of my app out in the field seems to having bad crash-at-app-start issues. I got him to send me the .crash files from his PC. After "symbolicating" them according to this article, I get what looks from the stack like a unrecognized selector fail. But the top line of code corresponding to my process is an unambiguous message send that gets executed hundreds of times without issue in my app normally. Needless to say, I never repro this issue myself. Can the crash report lie? Could this stack indicate anything besides unrecognized selector? Thanks for any insight. Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x00000000, 0x00000000 Crashed Thread: 0 Thread 0 Crashed: 0 libSystem.B.dylib 0x000790a0 __kill + 8 1 libSystem.B.dylib 0x00079090 kill + 4 2 libSystem.B.dylib 0x00079082 raise + 10 3 libSystem.B.dylib 0x0008d20a abort + 50 4 libstdc++.6.dylib 0x00044a1c __gnu_cxx::__verbose_terminate_handler() + 376 5 libobjc.A.dylib 0x000057c4 _objc_terminate + 104 6 libstdc++.6.dylib 0x00042dee __cxxabiv1::__terminate(void (*)()) + 46 7 libstdc++.6.dylib 0x00042e42 std::terminate() + 10 8 libstdc++.6.dylib 0x00042f12 __cxa_throw + 78 9 libobjc.A.dylib 0x000046a4 objc_exception_throw + 64 10 CoreFoundation 0x00094174 -[NSObject doesNotRecognizeSelector:] + 108 11 CoreFoundation 0x00093afa ___forwarding___ + 482 12 CoreFoundation 0x000306c8 _CF_forwarding_prep_0 + 40 13 MyAppProcess 0x000147c6 -[ImageLoader imageSmallForColor:style:] (ImageLoader.m:180) .... /* many more frames... */

    Read the article

  • Reduce size and change text color of the bootstrap-datepicker field

    - by Lisarien
    Programming an Android application based on HTML5/jQuery running in a web view, I'm using Eternicode's bootstrap-datepicker. BTW I'm using jQuery 1.9.1 and Bootstrap 2.3.2 with bootstrap-responsive. The picker is working fine but unfortunately I have two issues which I was not able to solve so far: the picker theme is not respected and some dates are not readable (see the attached picture). It seems there is a CSS conflict, such that the datepicker's font is displayed white on a white background. I'm also not able to reduce the size of the date picker field so that it fits on a single row. My markup code is: <div class="row-fluid" style="margin-right:0px!important"> <label id="dateId" class="span6">One Date</label> <div id="datePurchase" class="input-append date"> <input data-format="yyyy-MM-dd" type="text" /> <span class="add-on"> <i data-time-icon="icon-time" data-date-icon="icon-calendar"></i> </span> </div> </div> I init the datepicker from JavaScript: $("#dateId").datetimepicker({ pickTime: false, format: "dd/MM/yyyy", startDate: startDate, endDate: endDate, autoclose: true }); $("#dateId").on("changeDate", onChangeDate); Do you have any idea about these issues? Thanks very much

    Read the article

  • How to use an int2 database field as a boolean in Java using JPA/Hibernate

    - by mg
    Hello... I write an application based on an already existing database (postgreSQL) using JPA and Hibernate. There is a int2-column (activeYN) in a table, which is used as a boolean (0 = false (inactive), not 0 = true (active)). In the Java application i want to use this attribute as a boolean. So i defined the attribute like this: @Entity public class ModelClass implements Serializable { /*..... some Code .... */ private boolean active; @Column(name="activeYN") public boolean isActive() { return this.active; } /* .... some other Code ... */ } But there ist an exception because Hibernate expects an boolean database-field and not an int2. Can i do this mapping i any way while using a boolean in java?? I have a possible solution for this, but i don't really like it: My "hacky"-solution is the following: @Entity public class ModelClass implements Serializable { /*..... some Code .... */ private short active_USED_BY_JPA; //short because i need int2 /** * @Deprecated this method is only used by JPA. Use the method isActive() */ @Column(name="activeYN") public short getActive_USED_BY_JPA() { return this.active_USED_BY_JPA; } /** * @Deprecated this method is only used by JPA. * Use the method setActive(boolean active) */ public void setActive_USED_BY_JPA(short active) { this.active_USED_BY_JPA = active; } @Transient //jpa will ignore transient marked methods public boolean isActive() { return getActive_USED_BY_JPA() != 0; } @Transient public void setActive(boolean active) { this.setActive_USED_BY_JPA = active ? -1 : 0; } /* .... some other Code ... */ } Are there any other solutions for this problem? The "hibernate.hbm2ddl.auto"-value in the hibernate configuration is set to "validate". (sorry, my english is not the best, i hope you understand it anyway)..
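
    One Hibernate-specific alternative to the shadow-property workaround, assuming a reasonably recent Hibernate 3.x: its built-in "numeric_boolean" type maps an integer column holding 0/1 to a Java boolean. A sketch on the existing property (if the column really contains arbitrary non-zero values rather than just 1, a small custom UserType would still be needed):

        @org.hibernate.annotations.Type(type = "numeric_boolean")
        @Column(name = "activeYN")
        public boolean isActive() {
            return this.active;
        }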

    Read the article

  • Assigning a RecID field to a GridView TemplateField (Checkbox column)

    - by user279521
    I want to assign a RecID to the checkbox column "cbPOID". The RecID field that is being returned in my dataset, but should not be displayed in the gridview. <asp:GridView ID="gvOrders" runat="server" AutoGenerateColumns="False" CellPadding="4" GridLines="None" Width="100%" AllowPaging="True" PageSize="20" onpageindexchanging="gvOrders_PageIndexChanging" ForeColor="#333333"> <Columns> <asp:TemplateField HeaderText="VerifiedComplete" > <ItemTemplate> <asp:CheckBox ID="cbPOID" runat="server"/> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="PurchaseOrderID" HeaderText="PurchaseOrderID" HtmlEncode="False" ></asp:BoundField> <asp:BoundField DataField="VENDOR_ID" HeaderText="Vendor ID"></asp:BoundField> <asp:BoundField DataField="VENDOR_NAME" HeaderText="Vendor Name"></asp:BoundField> <asp:BoundField DataField="ITEM_DESC" HeaderText="Item Desc"></asp:BoundField> <asp:BoundField DataField="SYS_DATE" HeaderText="System Date"></asp:BoundField> </Columns> <FooterStyle CssClass="GridFooter" BackColor="#990000" Font-Bold="True" ForeColor="White" /> <PagerStyle CssClass="GridPager" ForeColor="#333333" BackColor="#FFCC66" HorizontalAlign="Center" /> <SelectedRowStyle BackColor="#FFCC66" Font-Bold="True" ForeColor="Navy" /> <HeaderStyle CssClass="GridHeader" BackColor="#990000" Font-Bold="True" ForeColor="White" /> <RowStyle CssClass="GridItem" BackColor="#FFFBD6" ForeColor="#333333" /> <AlternatingRowStyle CssClass="GridAltItem" BackColor="White" /> </asp:GridView>
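
    One common way to carry RecID with every row without rendering it is the GridView's DataKeyNames attribute (e.g. DataKeyNames="RecID" on the asp:GridView tag, assuming RecID is the column name returned by the dataset). The code-behind can then pair each checked box with its key; a sketch:

        foreach (GridViewRow row in gvOrders.Rows)
        {
            var cb = (CheckBox)row.FindControl("cbPOID");
            if (cb != null && cb.Checked)
            {
                int recId = Convert.ToInt32(gvOrders.DataKeys[row.RowIndex].Value);
                // ... mark the record identified by recId as verified complete
            }
        }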

    Read the article

  • Serializing a DataType="time" field using XmlSerializer

    - by CraftyFella
    Hi, I'm getting an odd result when serializing a DateTime field using XmlSerializer. I have the following class: public class RecordExample { [XmlElement("TheTime", DataType = "time")] public DateTime TheTime { get; set; } [XmlElement("TheDate", DataType = "date")] public DateTime TheDate { get; set; } public static bool Serialize(Stream stream, object obj, Type objType, Encoding encoding) { try { using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Encoding = encoding })) { var xmlSerializer = new XmlSerializer(objType); if (writer != null) xmlSerializer.Serialize(writer, obj); } return true; } catch (Exception) { return false; } } } When i call the use the XmlSerializer with the following testing code: var obj = new RecordExample {TheDate = DateTime.Now.Date, TheTime = new DateTime(0001, 1, 1, 12, 00, 00)}; var ms = new MemoryStream(); RecordExample.Serialize(ms, obj, typeof (RecordExample), Encoding.UTF8); txtSource2.Text = Encoding.UTF8.GetString(ms.ToArray()); I get some strange results, here's the xml that is produced: <?xml version="1.0" encoding="utf-8"?> <RecordExample xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <TheTime>12:00:00.0000000+00:00</TheTime> <TheDate>2010-03-08</TheDate> </RecordExample> Any idea's how i can get the "TheTime" element to contain a time which looks more like this: <TheTime>12:00:00.0Z</TheTime> ...as that's what i was expecting? Thanks Dave
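
    XmlSerializer does not offer much control over how an xsd:time value is written, so a common workaround is to serialize a pre-formatted string and hide the real DateTime from the serializer. A minimal sketch for the TheTime member inside RecordExample (it hard-codes the "Z" suffix and ignores real time-zone handling, so treat it as an illustration only):

        [XmlIgnore]
        public DateTime TheTime { get; set; }

        [XmlElement("TheTime")]
        public string TheTimeText
        {
            get { return TheTime.ToString("HH:mm:ss.f") + "Z"; }
            set { TheTime = DateTime.Parse(value.TrimEnd('Z')); }
        }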

    Read the article
