Search Results

Search found 70827 results on 2834 pages for 'data quality services'.


  • High-quality ERD generator for PostgreSQL under Linux?

    - by Dave Jarvis
    Background: MySQL Workbench can produce appealing and high-quality ERDs. Research: I have not found a tool that even comes close for PostgreSQL. Tools I have found:
    - dbVisualizer - Yellow squares.
    - AquaFold - Yellow squares.
    - SQL Developer - Coloured squares.
    - Dia - Coloured squares.
    - SchemaBank - Can't export to PNG; looks okay, nothing stellar.
    - SchemaSpy - XML export makes it possible to write an XSL skin...
    - Gliffy - Incompatible Flash version.
    - Druid - No.
    - pgAdmin3 - Not applicable?
    - phpPgAdmin - Couldn't log in without a 30-minute configuration battle.
    Requirements - I am looking for an ERD tool that is:
    - Visually stunning by default
    - Able to reverse-engineer a PostgreSQL (or JDBC-compliant) database
    - Runs on Linux (or under WINE)
    - Exports high-resolution PNG (or SVG)
    - FOSS

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Because of the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used; a third-party tool would have meant the customer providing an additional monitoring server that was not available. I want to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because slow query performance was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections ran the same query, and Profiler was used to pinpoint how the query was being managed within SSAS (Profiler I will leave for another blog post). This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections
    To simulate additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple connections against the typical and worst-performing queries identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).

    Performance Monitor
    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. If you do not, the counter log runs under the system account, no errors or warnings are given while the counter log is running, and it is only when you come to view the MSAS counters that you find they were not collected. If your connection simulation takes hours, this can prove quite frustrating if not done beforehand!

    The counters used (Object \ Counter (Instance) - justification):
    - System \ Processor Queue Length (N/A) - Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, it is a good indication that there is more work (active threads ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A) - Measures how frequently the processor has to switch from user to kernel mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term its value should remain fairly constant. If it suddenly starts increasing, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr) - Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv) - Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr) - If the Memory \ Available MBytes counter is decreasing, this counter can be used to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv) - As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores) - Measures the total utilization of your processor by all running processes. If the machine is multi-processor, be mindful that the _Total instance only provides an average.
    - Processor \ % Privileged Time (_Total) - Shows how the OS is handling basic I/O requests. If kernel-mode utilization is high, your machine is likely underpowered: it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total) - Shows how the applications are behaving from a processor perspective; a high percentage utilisation suggests the server is dealing with too many applications and may require the hardware to be increased or scaled out.
    - Processor \ Interrupts/sec (_Total) - The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time; a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A) - Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for an indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A) - The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk) - If it goes above 10 disk I/Os per second then you've got poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total) - If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see it fall below 20% then you've likely got read/write requests queuing up for a disk that is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks) - A value that is consistently less than 2 means that the disk system is handling the I/O requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC) - Should be monitored over a period of time to see whether there is an increase or decrease in network utilisation.
    - Network Interface \ Current Bandwidth (the NIC) - An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A) - Shows the high memory limit (configured as a percentage) for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A) - Shows the low memory limit (configured as a percentage) for SSAS in the same msmdsrv.ini file.
    - MSAS 2005: Memory \ Memory Usage KB (N/A) - Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A) - Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A) - The rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A) - The rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A) - The rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A) - The average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A) - The number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A) - The rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A) - The number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query pool job queue length (N/A) - The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A) - The number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A) - The number of rows of data processed in a temporary file.
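    [Editor's note] As an illustration of the counters listed above, here is a minimal C# sketch (not part of the original post) that samples a few of them programmatically with System.Diagnostics.PerformanceCounter. The "MSAS 2005: Connection" category name and the msmdsrv process instance are assumptions taken from the post; both depend on the SSAS version and instance name, so verify them in Perfmon before relying on them.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSampler
    {
        static void Main()
        {
            // Counter names taken from the list above. Reading them may require
            // membership in the Performance Monitor Users group, and the msmdsrv
            // and MSAS counters only exist on a machine running SSAS.
            var counters = new[]
            {
                new PerformanceCounter("System", "Processor Queue Length"),
                new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                new PerformanceCounter("Process", "% Processor Time", "msmdsrv"),
                new PerformanceCounter("Memory", "Available MBytes"),
                new PerformanceCounter("MSAS 2005: Connection", "Current connections")
            };

            // Rate-based counters such as % Processor Time need two samples
            // before NextValue() returns a meaningful number.
            foreach (var c in counters) c.NextValue();
            Thread.Sleep(1000);

            foreach (var c in counters)
            {
                string instance = string.IsNullOrEmpty(c.InstanceName) ? "N/A" : c.InstanceName;
                Console.WriteLine("{0}\\{1} ({2}): {3:F2}",
                    c.CategoryName, c.CounterName, instance, c.NextValue());
            }
        }
    }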

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments, customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples of how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented.

    Pre-requisites

    Access to a Cloud instance
    In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register, simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud.

    JDeveloper
    JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.

    Create Application
    In this example the "JcsWsDemo" application created in the "Java Cloud Service Integration using Web Service Proxy" article is used as the base.

    Create Web Service Data Control
    In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table. To generate the data control, choose the "Model" project, navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following:

    Name: CreditRuleServiceDC
    URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL
    Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService

    On step 2 select the "findRule" operation. Skip step 3, and on step 4 define the credentials to access the service. Note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated in the EAR file. Click "Finish" and the generation is done.

    Creating UI
    In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we create logic in a managed bean that populates the objects. Open "JcsWsDemoBean.java" and add the following logic:

    Map findCriteria;
    Map findControl;

    public void setFindCriteria(Map findCriteria) {
        this.findCriteria = findCriteria;
    }

    public Map getFindCriteria() {
        findCriteria = new HashMap();
        findCriteria.put("fetchSize", 10);
        findCriteria.put("fetchStart", 0);
        return findCriteria;
    }

    public void setFindControl(Map findControl) {
        this.findControl = findControl;
    }

    public Map getFindControl() {
        findControl = new HashMap();
        return findControl;
    }

    Open "JcsWsDemo.jspx", navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the "result" node into the "af:form" element in the page. In the "Edit Table Columns" dialog remove all columns except "RuleId" and "Name". In the "Edit Action Binding" window displayed, enter a reference to the Java class created above by selecting "#{JcsWsDemoBean.findCriteria}". Also define the value for "findControl" by selecting "#{JcsWsDemoBean.findControl}".

    Deploy to JCS
    For a Web Service Data Control the authentication details need to be updated in the connection details before deploying. Open "connections.xml" by navigating "Application Resources -> Descriptors -> ADF META-INF -> connections.xml" and change the user name and password entry from:

    <soap username="transportUserName" password="transportPassword" ...

    to match the access details for the target environment. Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed, the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx. When accessed, the first 10 rules in the system are displayed.

    Summary
    In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides the very nice functionality for client specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
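    [Editor's note] As an illustration of the fan-out behaviour described above, here is a minimal C# sketch; it is not a WCF Data Services feature and not an answer from the question, just a hypothetical picture of running the same query against several identically shaped databases and merging the results on the service side. The Order entity and IOrderSource abstraction are made-up names.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical entity shared by all databases (they have the same schema).
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // Minimal abstraction over "one database"; in practice this could wrap an
    // ObjectContext/DbContext created from one of several connection strings.
    public interface IOrderSource
    {
        IQueryable<Order> Orders { get; }
    }

    public static class FanOutQuery
    {
        // Runs the same query shape against every source and merges the results,
        // so the caller sees a single combined result set.
        public static IEnumerable<Order> QueryAll(
            IEnumerable<IOrderSource> sources,
            Func<IQueryable<Order>, IQueryable<Order>> query)
        {
            var results = new List<Order>();
            foreach (var source in sources)
            {
                // Each database executes the query independently.
                results.AddRange(query(source.Orders).ToList());
            }
            return results;
        }
    }

    // Usage sketch: FanOutQuery.QueryAll(sources, q => q.Where(o => o.Total > 1000));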

    Read the article

  • Rreport / LaTeX quality output package

    - by aL3xa
    I'm looking for a LaTeX template for creating quality output. On R-bloggers I bumped into Frank Harrell's Rreport package. Given my quite modest LaTeX abilities, only a user-friendly (and noob-friendly) interface will suffice. Here's a link to the official website. I'm following the instructions, but I cannot manage to install the package. I use Ubuntu 9.10, my R version is 2.10.1 (updated regularly from UCLA's CRAN server), and of course cvs is installed on my system. Now, I'd like to know whether there is some user-friendly LaTeX template package (Sweave is still too advanced/spartan for me). I'm aware that my question is quite confusing, but a brief glance at the examples on the Rreport page should give you a hint. I'm also aware that LaTeX skills are a must, but just for now I need something that will suit my needs (as a psychological researcher). Is there any equivalent of the Rreport package?

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • C# - High Quality Byte Array Conversion of Images

    - by Lijo
    Hi Team, I am converting images to a byte array and storing them in a text file using the following code, and I am retrieving them successfully as well. My concern is that the quality of the retrieved image is not up to expectations. Is there a way to get a better conversion to a byte array and back? I am not worried about space consumption. Please share your thoughts.

    string plaintextStoringLocation = @"D:\ImageSource\Cha5.txt";
    string bmpSourceLocation = @"D:\ImageSource\Cha50.bmp";

    // Read image
    Image sourceImg = Image.FromFile(bmpSourceLocation);

    // Convert to byte[]
    byte[] clearByteArray = ImageToByteArray(sourceImg);

    // Store it for future use (in plain text form)
    StoreToLocation(clearByteArray, plaintextStoringLocation);

    // Read back from the stored file
    byte[] retirevedImageBytes = ReadByteArrayFromFile(plaintextStoringLocation);

    // Retrieve from byte[]
    Image destinationImg = ByteArrayToImage(retirevedImageBytes);

    // Display image
    pictureBox1.Image = destinationImg;

    Thanks, Lijo
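    [Editor's note] The helper methods ImageToByteArray, ByteArrayToImage, StoreToLocation and ReadByteArrayFromFile are not shown in the question. The following is a minimal sketch of how such helpers might look, assuming a lossless format (PNG) is used when saving to the stream so the round trip does not degrade quality; it is an illustration, not the poster's actual code.

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;

    public static class ImageBytes
    {
        // Serialize the image to a byte[] using PNG, a lossless format,
        // so no quality is lost in the conversion.
        public static byte[] ImageToByteArray(Image image)
        {
            using (var ms = new MemoryStream())
            {
                image.Save(ms, ImageFormat.Png);
                return ms.ToArray();
            }
        }

        // Rebuild an Image from the raw bytes. The stream is intentionally not
        // disposed: GDI+ requires it to stay open for the lifetime of the Image.
        public static Image ByteArrayToImage(byte[] bytes)
        {
            var ms = new MemoryStream(bytes);
            return Image.FromStream(ms);
        }

        // Store the bytes as Base64 text, matching the question's
        // "plain text file" storage, and read them back.
        public static void StoreToLocation(byte[] bytes, string path)
        {
            File.WriteAllText(path, Convert.ToBase64String(bytes));
        }

        public static byte[] ReadByteArrayFromFile(string path)
        {
            return Convert.FromBase64String(File.ReadAllText(path));
        }
    }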

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and for me floating-point performance is very important. I like the performance of the Intel compiler very much and am quite content with the quality of the assembly it produces. I am thinking at some point of trying C++0x, mainly for the syntactic sugar such as auto, initializer lists, etc., but also lambdas; at this point I use those features in regular C++ by means of Boost. How good is the assembly code that compilers generate for C++0x, specifically the Intel and gcc compilers? Do they produce SSE code? Is the performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.

    Read the article

  • Exchange 2010 POP3/IMAP4/Transport services complaining that they can't find SSL certificate after blue screen

    - by Graeme Donaldson
    We have a single-server Exchange 2010 setup. In the early hours of this morning the server had a blue screen and rebooted. After coming back up, the POP3/IMAP4 and Transport services are complaining that they cannot find the correct SSL certificate for mail.example.com.

    POP3:
    Log Name: Application
    Source: MSExchangePOP3
    Date: 2012/04/23 11:45:15 AM
    Event ID: 2007
    Task Category: (1)
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: exch01.domain.local
    Description: A certificate for the host name "mail.example.com" couldn't be found. SSL or TLS encryption can't be made to the POP3 service.

    IMAP4:
    Log Name: Application
    Source: MSExchangeIMAP4
    Date: 2012/04/23 08:30:44 AM
    Event ID: 2007
    Task Category: (1)
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: exch01.domain.local
    Description: A certificate for the host name "mail.example.com" couldn't be found. Neither SSL or TLS encryption can be made to the IMAP service.

    Transport:
    Log Name: Application
    Source: MSExchangeTransport
    Date: 2012/04/23 08:32:27 AM
    Event ID: 12014
    Task Category: TransportService
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: exch01.domain.local
    Description: Microsoft Exchange could not find a certificate that contains the domain name mail.example.com in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector Default EXCH01 with a FQDN parameter of mail.example.com. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key.

    The odd part is that Get-ExchangeCertificate shows the cert as enabled for all the relevant services, and OWA is working flawlessly using this certificate.

    [PS] C:\Users\graeme\Desktop>Get-ExchangeCertificate
    Thumbprint                               Services Subject
    ----------                               -------- -------
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ....S.   CN=exch01
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY ....S.   CN=exch01
    ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ IP.WS.   CN=mail.example.com, OU=Domain Control Validated, O=mail.exa...

    The certificate is also present in the computer account's personal cert store. Does anyone have any pointers for getting POP3/IMAP4/SMTP to use the cert again?

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no report library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm with CA (Central Admin), WFE (web front end) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the process a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage Shared Schedules' item/link; I am not sure whether there should be other options. I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, but neither seems available. I am at a loss as to what to do, as all the information online seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Are there compact external USB audio interfaces which are better than on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice recognition software and dictates a lot of text using a headset. Now he has a new laptop that only has a combined mic/headphone jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi he said it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small devices the size of a flash memory stick, with only one mic and one headphone output, period. Now I am not sure whether these mini interfaces produce better sound than onboard audio. They all seem to come not from established audio interface manufacturers, but from electronics accessory manufacturers like Speedlink, or just no-name brands. Is there a compact audio interface with good A/D quality? It is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like RCA (cinch) in-/outputs. And if there isn't, will the no-name sound-card sticks offer any advantage over a simple adapter for the onboard sound?

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients would connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kind of defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking of something like DynDNS, but with enterprise availability guarantees.

    Read the article

  • Applying Interactive Sorting to Multiple Columns in Reporting Services

    - by smisner
    A nice feature that appeared first in SQL Server 2008 is the ability to allow the user to click a column header to sort that column. It defaults to an ascending sort first, but you can click the column again to switch to a descending sort. You can learn more about interactive sorts in general at the Adding Interactive Sort to a Data Region in Books Online. Not mentioned in the article is how to apply interactive sorting to multiple columns, hence the reason for this post! Let’s say that I have a simple table like this: To enable interactive sorting, I open the Text Box properties for each of the column headers – the ones in the top row. Here’s an example of how I set up basic interactive sorting: Now when I preview the report, I see icons appear in each text box on the header row to indicate that interactive sorting is enabled. The initial sort order that displays when you preview the report depends on how you design the report. In this case, the report sorts by Sales Territory Group first, and then by Calendar Year. Interactive sorting overrides the report design. So let’s say that I want to sort first by Calendar Year, and then by Sales Territory Group. To do this, I click the arrow to the right of Calendar Year, and then, while pressing the Shift key, I click the arrow to the right of Sales Territory Group twice (once for ascending order and then a second time for descending order). Now my report looks like this: This technique only seems to work when you have a minimum of three columns configured with interactive sorting. If I remove the property from one of the columns in the above example, and try to use the interactive sorting on the remaining two columns, I can sort only the first column. The sort on the second column gets ignored. I don’t know if that’s by design or a bug, but I do know that’s what I’m experiencing when I try it out!

    Read the article

  • VPN issue: SSTP Service service started and then stopped

    - by Ampersand
    When I was trying to set up a VPN connection on my laptop running Windows 7 Ultimate, I got this error:

    Network Connections
    Cannot load the Remote Access Connection Manager service. Error 711: The operation could not finish because it could not start the Remote Access Connection Manager service in time. Please try the operation again.

    I traced through some service dependencies and discovered that the Secure Socket Tunneling Protocol Service was set to Manual. However, when I try to manually start the service, I get:

    Services
    The Secure Socket Tunneling Protocol Service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.

    Setting all the services involved to Automatic did not help; SSTP just showed Automatic and Stopped in the Services panel. I found a solution that involved booting into Safe Mode and deleting the contents of C:\Windows\System32\LogFiles\WMI\RtBackup. This solution worked and I could set up a VPN connection, but only until I rebooted again.

    TL;DR: I'm looking for a way to permanently enable the Secure Socket Tunneling Protocol Service and the other VPN-related services so I don't have to reboot into Safe Mode and delete files every time I need to connect to a VPN.

    Read the article

  • What is the correct approach I should use for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS and that is going smoothly; the problem is that I am very new to server-side things and am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: my application should be able to query a servlet hosted in EC2 to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB. My main concerns are security and costs. So far I have been using the Amazon Token Vending Machine, but I haven't been successful in trying to customize it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to run a "social" picture service for my iOS application: registering new users, authenticating those users, and checking what permissions they have to which pictures in the bucket - all without having to worry about third parties accessing my users' private pictures. Sorry for this question, but I am really clueless about how to handle this... I have tried reading many articles, but all this server stuff looks very scary.

    Read the article

  • Problem adding "Network Policy and Access Services" role in Server 2008

    - by Django Reinhardt
    We are encountering an Error Code 0x80070643 when attempting to add the "Network Policy and Access Services" role on a fresh Windows Server 2008 R2 installation. Is there a known solution for this problem? Here is what information we have available so far: From ServerManager.log... 2504: 2009-11-23 11:12:01.712 [CBS] installing 'IAS NT Service RasServerAll RasRoutingProtocols ' ... 2504: 2009-11-23 11:12:01.911 [CBS] ...parents that will be auto-installed: 'RasServer ' 2504: 2009-11-23 11:12:01.912 [CBS] ...default children to turn-off: '<none>' 2504: 2009-11-23 11:12:01.924 [CBS] ...current state of 'IAS NT Service': p: Staged, a: Staged, s: UninstallRequested 2504: 2009-11-23 11:12:01.924 [CBS] ...setting state of 'IAS NT Service' to 'InstallRequested' 2504: 2009-11-23 11:12:01.935 [CBS] ...current state of 'RasServerAll': p: Staged, a: Staged, s: UninstallRequested 2504: 2009-11-23 11:12:01.935 [CBS] ...setting state of 'RasServerAll' to 'InstallRequested' 2504: 2009-11-23 11:12:01.946 [CBS] ...current state of 'RasRoutingProtocols': p: Staged, a: Staged, s: UninstallRequested 2504: 2009-11-23 11:12:01.946 [CBS] ...setting state of 'RasRoutingProtocols' to 'InstallRequested' 2504: 2009-11-23 11:12:01.956 [CBS] ...current state of 'RasServer': p: Staged, a: Staged, s: UninstallRequested 2504: 2009-11-23 11:12:01.956 [CBS] ...setting state of 'RasServer' to 'InstallRequested' 2504: 2009-11-23 11:12:01.967 [CBS] ...'IAS NT Service' : applicability: Applicable 2504: 2009-11-23 11:12:01.977 [CBS] ...'RasServerAll' : applicability: Applicable 2504: 2009-11-23 11:12:01.987 [CBS] ...'RasRoutingProtocols' : applicability: Applicable 2504: 2009-11-23 11:12:01.998 [CBS] ...'RasServer' : applicability: Applicable 2504: 2009-11-23 11:12:02.906 [CbsUIHandler] Initiate: 2504: 2009-11-23 11:12:02.906 [InstallationProgressPage] Installing... 2504: 2009-11-23 11:12:54.311 [CbsUIHandler] Error: -2147023293 : 2504: 2009-11-23 11:12:54.313 [CbsUIHandler] Terminate: 2504: 2009-11-23 11:12:54.316 [InstallationProgressPage] Verifying installation... 2504: 2009-11-23 11:12:54.326 [CBS] ...done installing 'IAS NT Service RasServerAll RasRoutingProtocols '. Status: -2147023293 (80070643) 2504: 2009-11-23 11:12:54.329 [NPAS] Skipped configuration of 'Network Policy Server' because install operation failed. 2504: 2009-11-23 11:12:54.330 [NPAS] Skipped configuration of 'Remote Access Service' because install operation failed. 2504: 2009-11-23 11:12:54.330 [NPAS] Skipped configuration of 'Routing' because install operation failed. 2504: 2009-11-23 11:12:54.330 [Provider] [STAT] ---- CBS Session Consolidation ----- [STAT] For 'Network Policy Server', 'Remote Access Service', 'Routing'[STAT] installation(s) took '52.616957' second(s) total. [STAT] Configuration(s) took '0.0004948' second(s) total. [STAT] Total time: '52.6174518' second(s). From System Event Viewer... 
Log Name: System Source: Service Control Manager Date: 23/11/2009 11:12:23 Event ID: 7023 Task Category: None Level: Error Keywords: Classic User: N/A Computer: Av7Analytics Description: The Network Policy Server service terminated with the following error: %%-2147013892 Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" /> <EventID Qualifiers="49152">7023</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2009-11-23T11:12:23.653578500Z" /> <EventRecordID>1317</EventRecordID> <Correlation /> <Execution ProcessID="468" ThreadID="2308" /> <Channel>System</Channel> <Computer>Av7Analytics</Computer> <Security /> </System> <EventData> <Data Name="param1">Network Policy Server</Data> <Data Name="param2">%%-2147013892</Data> </EventData> </Event> From Setup Event Viewer... Log Name: Setup Source: Microsoft-Windows-ServerManager Date: 23/11/2009 11:12:56 Event ID: 1616 Task Category: None Level: Error Keywords: User: AV7ANALYTICS\RenamedAdmin Computer: Av7Analytics Description: Installation failed. Roles: Network Policy and Access Services Error: Attempt to install Network Policy Server failed with error code 0x80070643. Fatal error during installation Error: Attempt to install Remote Access Service failed with error code 0x80070643. Fatal error during installation Error: Attempt to install Routing failed with error code 0x80070643. Fatal error during installation The following role services were not installed: Network Policy Server Routing and Remote Access Services Remote Access Service Routing Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-ServerManager" Guid="{8C474092-13E4-430E-9F06-5B60A529BF38}" /> <EventID>1616</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x4000000000000000</Keywords> <TimeCreated SystemTime="2009-11-23T11:12:56.046431200Z" /> <EventRecordID>115</EventRecordID> <Correlation /> <Execution ProcessID="2504" ThreadID="2344" /> <Channel>Setup</Channel> <Computer>Av7Analytics</Computer> <Security UserID="S-1-5-21-2753803390-1569373846-1208217686-500" /> </System> <UserData> <EventXML xmlns:auto-ns3="http://schemas.microsoft.com/win/2004/08/events" xmlns="Event_NS"> <message> Roles: Network Policy and Access Services Error: Attempt to install Network Policy Server failed with error code 0x80070643. Fatal error during installation Error: Attempt to install Remote Access Service failed with error code 0x80070643. Fatal error during installation Error: Attempt to install Routing failed with error code 0x80070643. Fatal error during installation The following role services were not installed: Network Policy Server Routing and Remote Access Services Remote Access Service Routing </message> <identifiers>14, 206, 207, 208, 205</identifiers> </EventXML> </UserData> Thanks in advance for any help. It's quite shocking that we're already having problems with Microsoft's "latest and greatest".

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or PageMethods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are: Capture form variables into an array on the client with jQuery’s .serializeArray() function Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array On the server create a custom type that matches the .serializeArray() name/value structure Create extension methods on NameValue[] to easily extract form variables Create a [WebMethod] that accepts this name/value type as an array (NameValue[]) This seems like a lot of work but realize that steps 3 and 4 are a one time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that looks like this as a base form of fields to ship to the server: The HTML for this form looks something like this: <div id="divMessage" class="errordisplay" style="display: none"> </div> <div> <div class="label">Name:</div> <div><asp:TextBox runat="server" ID="txtName" /></div> </div> <div> <div class="label">Company:</div> <div><asp:TextBox runat="server" ID="txtCompany"/></div> </div> <div> <div class="label" ></div> <div> <asp:DropDownList runat="server" ID="lstAttending"> <asp:ListItem Text="Attending" Value="Attending"/> <asp:ListItem Text="Not Attending" Value="NotAttending" /> <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" /> <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" /> </asp:DropDownList> </div> </div> <div> <div class="label">Special Needs:<br /> <small>(check all that apply)</small></div> <div> <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple"> <asp:ListItem Text="Vegitarian" Value="Vegitarian" /> <asp:ListItem Text="Vegan" Value="Vegan" /> <asp:ListItem Text="Kosher" Value="Kosher" /> <asp:ListItem Text="Special Access" Value="SpecialAccess" /> <asp:ListItem Text="No Binder" Value="NoBinder" /> </asp:ListBox> </div> </div> <div> <div class="label"></div> <div> <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" /> </div> </div> <hr /> <input type="button" id="btnSubmit" value="Send Registration" /> The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and just capture the content of these values and echo then back as a formatted HTML string. 
Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback. public class PageMethodsService : System.Web.Services.WebService { [WebMethod] public string SendRegistration(NameValue[] formVars) { StringBuilder sb = new StringBuilder(); sb.AppendFormat("Thank you {0}, <br/><br/>", HttpUtility.HtmlEncode(formVars.Form("txtName"))); sb.AppendLine("You've entered the following: <hr/>"); foreach (NameValue nv in formVars) { // strip out ASP.NET form vars like _ViewState/_EventValidation if (!nv.name.StartsWith("__")) { if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk")) sb.Append(nv.name.Substring(3)); else sb.Append(nv.name); sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>"); } } sb.AppendLine("<hr/>"); string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs == null) sb.AppendLine("No Special Needs"); else { sb.AppendLine("Special Needs: <br/>"); foreach (string need in needs) { sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>"); } } return sb.ToString(); } } The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray(): public class NameValue { public string name { get; set; } public string value { get; set; } } The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array: /// <summary> /// Simple NameValue class that maps name and value /// properties that can be used with jQuery's /// $.serializeArray() function and JSON requests /// </summary> public static class NameValueExtensionMethods { /// <summary> /// Retrieves a single form variable from the list of /// form variables stored /// </summary> /// <param name="formVars"></param> /// <param name="name">formvar to retrieve</param> /// <returns>value or string.Empty if not found</returns> public static string Form(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault(); if (matches != null) return matches.value; return string.Empty; } /// <summary> /// Retrieves multiple selection form variables from the list of /// form variables stored. 
/// </summary> /// <param name="formVars"></param> /// <param name="name">The name of the form var to retrieve</param> /// <returns>values as string[] or null if no match is found</returns> public static string[] FormMultiple(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray(); if (matches.Length == 0) return null; return matches; } } Using these extension methods it’s easy to retrieve individual values from the array: string name = formVars.Form("txtName"); or multiple values: string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs != null) { // do something with matches } Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in typical app you’d probably want to validate the input data and save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service To call the AJAX service method on the client is straight forward and requires only use of little native jQuery plus JSON serialization functionality. To start add jQuery and the json2.js library to your page: <script src="Scripts/jquery.min.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document let’s hookup the button click handler and call the service: $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); $.ajax({ url: "PageMethodsService.asmx/SendRegistration", type: "POST", contentType: "application/json", data: JSON.stringify({ formVars: arForm }), dataType: "json", success: function (result) { var jEl = $("#divMessage"); jEl.html(result.d).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, error: function (xhr, status) { alert("An error occurred: " + status); } }); } The key feature in this code is the $("#form1").serializeArray();  call which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case its a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. 
Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object, so the result has a ‘d’ property that holds the actual response:

jEl.html(result.d).fadeIn(1000);

Slightly simpler: Using ServiceProxy.js

If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly: automatic JSON encoding, automatic fix-up of the ‘d’ wrapper property, automatic date conversion on the client, simplified error handling, and it’s reusable and abstracted. To add the service proxy, add:

<script src="Scripts/ServiceProxy.js" type="text/javascript"></script>

and then change the code to this slightly simpler version:

<script type="text/javascript">
    proxy = new ServiceProxy("PageMethodsService.asmx/");

    $(document).ready(function () {
        $("#btnSubmit").click(sendRegistration);
    });

    function sendRegistration() {
        var arForm = $("#form1").serializeArray();
        proxy.invoke("SendRegistration",
            { formVars: arForm },
            function (result) {
                var jEl = $("#divMessage");
                jEl.html(result).fadeIn(1000);
                setTimeout(function () { jEl.fadeOut(1000) }, 5000);
            },
            function (error) {
                alert(error.message);
            });
    }
</script>

The code is not very different, but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use or manually serializing to JSON. This code also removes the ‘d’ property processing in the response and provides more consistent error handling, in that the call always returns an error object regardless of whether it was a server error or a communication error, unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls, and especially if you need to deal with date values returned from the server on the client.

Summary

Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code, without having to manually map form vars to properties and do string conversions. Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data, and the built-in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter passed as a query string or form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type so that you could more easily create open access service endpoints that can take advantage of the urlencoded data that is everywhere in existing pages.
It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.

Additional Resources

ServiceProxy.js – A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion.

Making jQuery Calls to WCF/ASMX with a ServiceProxy Client – More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written.

ww.jquery.js – The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX

    Read the article

  • SQLAuthority News – Download Whitepaper Using SharePoint List Data in PowerPivot

    - by pinaldave
    One of the many features of Microsoft SQL Server PowerPivot is the range of data sources that can be used to import data. Anything, from Microsoft SQL Server relational databases, Oracle databases, and Microsoft Access databases, to text documents, can be used as data sources in PowerPivot. In this paper, I explain one of the new and upcoming data sources that people are excited about – SharePoint list data in the form of Atom feeds. This white paper goes on to explain the different ways you can import SharePoint list data into PowerPivot, what types of lists are supported, various components that need to be installed to use this feature, and where to get those components. Download and read this whitepaper. Note: Abstract is taken from MSDN Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au

Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept.

A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”.

This browsing mechanism is well suited to a number of different uses, because it’s just that – browsing. It’s not searching; it’s more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer.

It’s interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this.

The more observant amongst you will have noticed some differences between the collection that Michael demonstrated and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more.
He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view).

I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract.

Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr and lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps.

I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.

    Read the article

  • Oracle Unbreakable Enterprise Kernel and Emulex HBA Eliminate Silent Data Corruption

    - by sergio.leunissen
Yesterday, Emulex announced that it has added support for T10 Protection Information (T10-PI), formerly called T10-DIF, to a number of its HBAs. When used with Oracle's Unbreakable Enterprise Kernel, this will prevent silent data corruption and help ensure the integrity and regulatory compliance of user data as it is transferred from the application to the SAN. From the press release: Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, including Error Correcting Code (ECC) and Cyclic Redundancy Check (CRC), but there have been coverage gaps across the I/O path from the operating system to the storage. The implementation of the T10-PI standard via Emulex's BlockGuard feature, in conjunction with other industry players' implementations, ensures that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity. Read the white paper and don't miss the live webcast on eliminating silent data corruption on December 16th!

    Read the article

  • Storing large data in HTTP Session (Java Application)

    - by Umesh Awasthi
I am asking this question in continuation of http-session-or-database-approach. I am planning to follow this approach:

When a user adds a product to the cart, create a Cart model, add items to the cart and save it to the DB. Convert the Cart model to cart data and save that to the HTTP session. On any update/edit, update the underlying cart in the DB and refresh the data snapshot in the session. When the user clicks the view cart page, just pick the cart data from the session and display it to the customer.

I have the following queries regarding the HTTP session:

How good is it to store large data (a shopping cart) in the session? How scalable can this approach be (with respect to the session)? Won't my application end up consuming a lot of memory? Is my approach fine, or do I need to consider other points while designing this? Although we can control which cart data gets stored in the session, don't we still need to keep certain cart information there?
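
To make the snapshot idea above concrete, here is a minimal Java sketch of the kind of lightweight, serializable cart snapshot the question describes: only display data and a key back to the DB row go into the session, so the per-session memory footprint stays small. All class, field and attribute names here are illustrative assumptions, not taken from the question.

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpSession;

/** Lightweight snapshot of the cart kept in the HTTP session; the full cart lives in the DB. */
public class CartSnapshot implements Serializable {

    private static final long serialVersionUID = 1L;

    /** One line per cart entry: just enough to render the view cart page. */
    public static class Line implements Serializable {
        private static final long serialVersionUID = 1L;
        public final long productId;
        public final String displayName;
        public final int quantity;
        public final BigDecimal unitPrice;

        public Line(long productId, String displayName, int quantity, BigDecimal unitPrice) {
            this.productId = productId;
            this.displayName = displayName;
            this.quantity = quantity;
            this.unitPrice = unitPrice;
        }
    }

    public final long cartId;                       // key back into the DB row
    public final List<Line> lines = new ArrayList<Line>();

    public CartSnapshot(long cartId) {
        this.cartId = cartId;
    }

    public BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (Line l : lines) {
            sum = sum.add(l.unitPrice.multiply(BigDecimal.valueOf(l.quantity)));
        }
        return sum;
    }

    /** Store (or refresh) the snapshot after the DB cart has been updated. */
    public static void publishToSession(HttpSession session, CartSnapshot snapshot) {
        session.setAttribute("cartSnapshot", snapshot);
    }
}

Keeping the snapshot Serializable also matters for the scalability question: if sessions are replicated or persisted across a cluster, the container has to serialize whatever is stored in them, so the smaller and flatter the object, the cheaper that is.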

    Read the article

  • Google I/O 2012 - OAuth 2.0 for Identity and Data Access

Google I/O 2012 - OAuth 2.0 for Identity and Data Access, presented by Ryan Boyd. Users like to keep their data in one place on the web where it's easily accessible. Whether it's YouTube videos, Google Drive files, Google contacts or one of many other types of data, users need a way to securely grant applications access to their data. OAuth is the key web standard for delegated data access and OAuth 2.0 is the next-generation version with additional security features. This session will cover the latest advances in how OAuth can be used for data access, but will also dive into how you can lower the barrier to entry for your application by allowing users to log in using their Google accounts. You will learn, through an example written in Python, how to use OAuth 2.0 to incorporate user identity into your web application. Best practices for desktop applications, mobile applications and server-to-server use cases will also be discussed. From: GoogleDevelopers. Time: 58:56.
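
As a rough companion to the session description above, here is a sketch in Java of the authorization-code token exchange that sits at the heart of this flow (the session's own example is in Python). The parameter names follow the OAuth 2.0 specification; treat the Google endpoint URL and the client credentials as illustrative placeholders rather than a definitive implementation.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

/**
 * Sketch of the OAuth 2.0 authorization-code token exchange.
 * Endpoint and credentials below are placeholders for illustration.
 */
public class TokenExchangeSketch {

    private static final String TOKEN_ENDPOINT = "https://accounts.google.com/o/oauth2/token";

    public static String exchangeCodeForTokens(String authorizationCode) throws Exception {
        String body =
              "grant_type=authorization_code"
            + "&code=" + URLEncoder.encode(authorizationCode, "UTF-8")
            + "&client_id=" + URLEncoder.encode("YOUR_CLIENT_ID", "UTF-8")
            + "&client_secret=" + URLEncoder.encode("YOUR_CLIENT_SECRET", "UTF-8")
            + "&redirect_uri=" + URLEncoder.encode("https://example.com/oauth2callback", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(TOKEN_ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);

        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        // The response is a JSON document containing access_token, expires_in,
        // optionally refresh_token and, for identity scopes, an id_token.
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            json.append(line);
        }
        reader.close();
        return json.toString();
    }
}

The returned access token is then attached to API requests (typically in an Authorization: Bearer header), while the id_token, when present, carries the signed identity claims used to log the user in.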

    Read the article

  • Tellago Devlabs: A RESTful API for BizTalk Server Business Rules

    - by gsusx
Tellago DevLabs keeps growing as the primary example of our commitment to open source! Today, we are very happy to announce the availability of the BizTalk Business Rules Data Service API, which extends our existing BizTalk Data Services solution with an OData API for the BizTalk Server Business Rules engine. Tellago's Vishal Mody led the implementation of this version of the API with some input from other members of our technical staff. The motivation: The fundamental motivation behind the BRE Data...(read more)
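
Because the API is exposed as OData, any plain HTTP client can consume it. The Java sketch below shows the general shape of such a call; the service root and resource name are hypothetical placeholders, not the actual endpoints of the Business Rules Data Service API (see the project itself for those).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Sketch of consuming an OData resource over plain HTTP.
 * The URL used in main() is hypothetical, for illustration only.
 */
public class ODataFeedSketch {

    public static String fetchAsJson(String resourceUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(resourceUrl).openConnection();
        conn.setRequestMethod("GET");
        // OData services return Atom/XML by default; ask for the JSON representation instead.
        conn.setRequestProperty("Accept", "application/json");

        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder payload = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            payload.append(line);
        }
        reader.close();
        return payload.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical resource path, for illustration only.
        System.out.println(fetchAsJson("http://localhost/BizTalkDataServices/BusinessRules.svc/Policies"));
    }
}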

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days.

Application Input

Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache containing all credit card transactions. The cache is configured in the EPN as shown below:

<wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
<wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
             key-properties="cardID, transactionTime"
             caching-system="CohCacheSystem">
</wlevs:cache>

Application Output

The output that must be produced by the application is a fraud warning event. This event is configured in the Spring file as shown below. The source for the cardHistory property can be seen here.

<wlevs:event-type type-name="FraudWarningEvent">
    <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
    </wlevs:properties>
</wlevs:event-type>

Cache Data Aggregation using a Java Cartridge

In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the Spring context file as shown below:

<bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
    <property name="cache" ref="CCTransactionsCache"/>
</bean>

This is used by the Java class to set a static property:

public void setCache(Map cache)
{
    s_cache = (NamedCache) cache;
}

The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence’s API to query a cache, please refer to the Coherence Developer’s Guide.
public static CardHistoryData execute(String cardID)
{
    …
    Filter filter = QueryHelper.createFilter("cardID = :cardID and transactionTime > :transactionTime", map);

    CardHistoryData history = new CardHistoryData();
    Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
    history.setTotalAmount(sum);
    …
    return history;
}

The Java cartridge method is used from CQL as seen below:

select cardID, transactionTime, transactionAmount, CCTransactionsAggregator.execute(cardID) as cardHistory
from inputChannel
where transactionAmount > 1000

This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
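
The post notes that the rest of the CardHistoryData fields are calculated "in a similar manner". As a rough sketch of what that could look like, the maximum amount and transaction count can be computed with Coherence's built-in DoubleMax and Count aggregators over the same filter. The setter names on CardHistoryData below are assumed for illustration, since the post does not show that class.

import java.util.HashMap;
import java.util.Map;

import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.QueryHelper;
import com.tangosol.util.aggregator.Count;
import com.tangosol.util.aggregator.DoubleMax;

public class CardHistoryAggregationSketch {

    // Assumed to be wired the same way as s_cache in the post's CCTransactionsAggregator.
    private static NamedCache s_cache;

    /** Fills in the max-amount and transaction-count figures for the past 30 days. */
    public static void addMaxAndCount(CardHistoryData history, String cardID, long cutoffMillis) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("cardID", cardID);
        params.put("transactionTime", cutoffMillis);

        // Mirrors the filter construction shown in the post's code snippet.
        Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime > :transactionTime", params);

        // Maximum single transaction amount over the window.
        Double max = (Double) s_cache.aggregate(filter, new DoubleMax("getTransactionAmount"));
        history.setMaxAmount(max);            // assumed setter on CardHistoryData

        // Total number of transactions over the window.
        Integer count = (Integer) s_cache.aggregate(filter, new Count());
        history.setTotalTransactions(count);  // assumed setter on CardHistoryData
    }
}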

    Read the article

  • Logical and Physical Modeling for Analytical Applications

    - by Dejan Sarka
I am proud to announce that my first course for Pluralsight has been released. The course title is Logical and Physical Modeling for Analytical Applications. Here is the description of the course. A bad data model leads to an application that does not perform well. Therefore, when developing an application, you should create a good data model from the start. However, even the best logical model can’t help when the physical implementation is bad. It is also important to know how SQL Server stores and accesses data, and how to optimize the data access. Database optimization starts by splitting transactional and analytical applications. In this course, you learn how to support analytical applications with logical design, get an understanding of the problems with data access for queries that deal with large amounts of data, and learn about SQL Server optimizations that help solve these problems. Enjoy the course!

    Read the article
