Search Results

Search found 25180 results on 1008 pages for 'post processing'.

  • JSR updates - November 2013

    - by Heather VanCura
     This week has been a busy week for JCP participants! Ten JSRs related to the upcoming Java Standard Edition (Java SE) 8 release posted public reviews--four Public Reviews and six Maintenance Reviews. All JSRs are operating under the latest version of the JCP program and have public feedback mechanisms and issue trackers. Please review and comment on these JSRs--your input and participation is wanted and needed!
     JSR 308, Annotations on Java Types, published a Public Review. This review closes 4 December.
     JSR 310, Date and Time API, published a Public Review. This review closes 4 December.
     JSR 335, Lambda Expressions for the Java Programming Language, published a Public Review. This review closes 4 December.
     JSR 337, Java SE 8 Release Contents, published a Public Review. This review closes 4 December.
     JSR 221, JDBC 4.0 API, published a Maintenance Review. This review closes 4 December.
     JSR 199, Java Compiler API, published a Maintenance Review. This review closes 4 December.
     JSR 160, Java Management Extensions Remote API, published a Maintenance Review. This review closes 4 December.
     JSR 114, JDBC Rowset Implementations, published a Maintenance Review. This review closes 4 December.
     JSR 3, Java Management Extensions Specification, published a Maintenance Review. This review closes 4 December.
     JSR 206, Java API for XML Processing, published a Maintenance Review. This review closes 22 November.
     Two other JSRs also published recent updates:
     JSR 354, Money and Currency API, published a Public Review. This review closes 23 November.
     JSR 107, JCACHE - Java Temporary Caching API, published a Proposed Final Draft.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
     Sometimes an application needs to have every last drop of performance it can get, others not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should be avoiding code like this in your stored procs even though it seems such an intuitively good idea.

         DECLARE @StartDate datetime
         SET @StartDate = '20091125'
         SELECT * FROM Orders WHERE OrderDate = @StartDate

     Instead you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan.

         SELECT * FROM Orders WHERE OrderDate = '20091125'

     My first thought about this one was we reference variables in the form of passed in parameters in WHERE clauses in many of our stored procs. Not to worry though because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
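     To make the parameter point concrete, here is a minimal sketch (the Orders table comes from the snippet above; the procedure name is hypothetical) of the case Jim describes, where the value arrives as a stored procedure parameter rather than a local variable:

         -- The parameter value is visible to the Query Optimizer when the plan is
         -- compiled, so it can estimate from the column statistics instead of guessing.
         CREATE PROCEDURE dbo.GetOrdersByDate
             @StartDate datetime
         AS
         BEGIN
             SELECT * FROM Orders WHERE OrderDate = @StartDate;
         END;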

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
     In yesterday’s blog post we understood how Big Data evolution happened. Today we will understand the basics of Big Data Architecture. Big Data Cycle Just like every other database-related application, a big data project has its own development cycle. The three Vs (link) certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of capturing, transforming, integrating and analyzing the data, and building actionable reporting on top of it. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture. Questions to Ask
     How big is your total database?
     What is your reporting requirement in terms of time – real time, semi real time or at frequent intervals?
     How important is the data availability and what is the plan for disaster recovery?
     What are the plans for network and physical security of the data?
     What platform will be the driving force behind the data and what are the different service level agreements for the infrastructure?
     These are just basic questions, but based on your application and business needs you should come up with your own custom list of questions to ask. As I mentioned earlier, these questions may look quite simple but the answers will not be simple. When we are talking about a Big Data implementation there are many other important aspects which we have to consider when we decide to go for the architecture. Building Blocks of Big Data Architecture It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of big data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work. The above image gives a good overview of how the various components in a Big Data Architecture are associated with each other. In Big Data, many different data sources are part of the architecture, hence extract, transform and integrate are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, the data is processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data Architecture. In the big data architecture, hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented. NoSQL in Data Management NoSQL is a very famous buzz word and it really means Not Relational SQL or Not Only SQL. This is because in Big Data Architecture the data can be in any format. It can be unstructured, relational, in any other format or from any other data source. To bring all the data together, relational technology is not enough, hence new tools, architectures and algorithms have been invented which take care of all kinds of data. This is collectively called NoSQL. Tomorrow Over the next four days we will cover the buzz word – Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
     Data Warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when too big is not enough. Every Data Warehouse Admin will agree that they have faced a situation where they needed to scale up their data warehouse. Microsoft has released a white paper discussing the same. Here is the abstract from the Microsoft Official site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Connecting the Dots (.NET Business Connector)

    - by ssmantha
     Recently, one of my colleagues was experimenting with Reporting Server on DAX 2009; whenever he viewed a report in SQL Server Reporting Manager he was welcomed with an error: “Error during processing Ax_CompanyName report parameter. (rsReportParameterProcessingError)” The Event Log had the following entry: Dynamics Adapter LogonAs failed. Microsoft.Dynamics.Framework.BusinessConnector.Session.Exceptions.FatalSessionException at Microsoft.Dynamics.Framework.BusinessConnector.Session.DynamicsSession.HandleException(String message, Exception exception, HandleExceptionCallback callback) We later found out that this was due to an incorrect Business Connector account; from my past experience I have noticed this is a very common mistake people make during EP and Reporting installations. Remember that the reports need to connect to the Dynamics AX server to run the AxQueries, which needs to pass through the .NET Business Connector. To ensure everything works fine please note the following settings:
     1) Your Report Server service account should be the same as the .NET Business Connector proxy account.
     2) Ensure that on the server which has Reporting Services installed, the client configuration utility for the Business Connector points to the correct proxy account.
     3) And finally, the AX instance you are connecting to has the service account specified for the .NET Business Connector (administration –> Service accounts –> .NET Business Connector).
     These simple checkpoints can help with most of the Business Connector related errors, which I believe are mostly due to incorrect configuration settings. Happy DAXing!!

    Read the article

  • Oracle Fusion Middleware Innovation Awards 2012 submissions - Only 2 weeks to go

    - by Lionel Dubreuil
     You have less than 2 weeks left (July 17th) to submit Fusion Middleware Innovation Award nominations. As a reminder, these awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Please visit oracle.com/corporate/awards/middleware for more details and nomination forms. Our “Service Integration (SOA) and BPM” category covers Oracle SOA Suite, Oracle BPM Suite, Oracle Event Processing, Oracle Service Bus, Oracle B2B Integration, Oracle Application Integration Architecture (AIA), Oracle Enterprise Repository... To submit your nomination, the process is very simple: Download the Service Integration (SOA) and BPM Form Complete this form with as much detail as possible. Submit completed form and any relevant supporting documents to: [email protected] Email subject category “Service Integration (SOA) and BPM” when submitting your nomination.

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012 Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illlustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication." Web Service Example - Part 3: Asynchronous Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data." Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco. Oracle WebLogic Server WLS Domain Browser My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site. Retrieve Performance Data from SOA Infrastructure Database Another of the four blog posts published on Dec 4 by very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time." How to Achieve OC4J RMI Load Balancing "Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public." From XaaS to Java EE – Which damn cloud is right for me in 2012? Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives. Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.) ADF Mobile - Implementing Reusable Mobile Architecture "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile." 
Using BPEL Performance Statistics to Diagnose Performance Bottlenecks Someone had a busy day… This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience." Thought for the Day "If you're afraid to change something it is clearly poorly designed." — Martin Fowler Source: SoftwareQuotes.com

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
     In yesterday’s blog post we learned the importance of the various components in the Big Data Story. In this article we will understand the various analytics tasks we try to achieve with Big Data and the list of the important tools in the Big Data Story. When you have plenty of data around you, what is the first thing which comes to your mind? “What does all this data mean?” Exactly – the same thought comes to my mind as well. I always wanted to know what all the data means and what meaningful information I can receive out of it. Most of the Big Data projects are built to retrieve the various intelligence all this data contains within it. Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions such as - On which date do most of my friends have a birthday? What is the favorite film of most of my friends, so I can talk about it and engage them? What is the most liked place to travel among my friends? Which is the most disliked cousin for my friends in India and the USA, so that when they travel, I do not take them there? There are many more questions I can think of. This illustrates how important it is to have analysis of Big Data. Here are a few of the kinds of analysis which you can use with Big Data. Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. This also helps to present the information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various formats like actors, movie length etc. Real Time Monitoring: This is very crucial in social media when there are events happening and you want to measure the impact at the time the event is happening. For example, if you are using Twitter when there is a football match, you can watch what fans are talking about on Twitter while the event is happening. Anomaly Prediction and Modeling: If the business is running normally it is alright, but if there are signs of trouble, everyone wants to know about them early on. Big Data analysis of various patterns can be very helpful to predict the future. Though it may not always be accurate, certain hints and signals can be very helpful. For example, lots of data can help conclude that if there is lots of rain it can increase the sale of umbrellas. Text and Unstructured Data Analysis: Unstructured data is now becoming the norm in the new world and it is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, from the analysis of lots of images, one can predict that people like to use certain colors in their clothes in certain months. Big Data Analytics Solutions There are many different Big Data Analytics solutions out in the market. It is impossible to list all of them so I will list a few of them over here.
     Tableau – This has to be one of the most popular visualization tools out in the big data market.
     SAS – A high performance analytics and infrastructure company.
     IBM and Oracle – They have a range of tools for Big Data Analysis.
     Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data Ecosystem – the Data Scientist. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
     The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.  Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to setup and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data:

         $ hadoop fs -ls /user/hrdata
         Found 1 items
         -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
         $ hadoop fs -cat /user/hrdata/salaries.txt
         Tom Brady,11000000
         Tom Hanks,5000000
         Bob Smith,250000
         Oprah,300000000

     User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.

         $ hadoop fs -ls /user
         Found 1 items
         drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

     However, DrEvil cannot view the contents of the folder due to lack of access privileges:

         $ hadoop fs -ls /user/hrdata
         ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

     Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

         $ sudo useradd hr
         $ sudo passwd
         $ su hr
         $ hadoop fs -cat /user/hrdata/salaries.txt
         Tom Brady,11000000
         Tom Hanks,5000000
         Bob Smith,250000
         Oprah,300000000

     Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised.
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
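As a rough sketch of what this role-based model looks like in practice with Hive (the role, group and database names below are hypothetical and are only meant to illustrate the grammar):

    -- Create a role, grant it a fine-grained privilege, and assign it to a group.
    CREATE ROLE hr_analyst;
    GRANT SELECT ON DATABASE hrdata TO ROLE hr_analyst;   -- read-only access to one database
    GRANT ROLE hr_analyst TO GROUP hr;                     -- members of the hr group inherit the privilege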
Looking ahead, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component of a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include:
MapReduce:  correlate the MapReduce job that accessed the file
Oozie:  describes who ran what as part of a workflow
Hive:  captures changes made to the Hive metadata
The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger on violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case-study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don’t see often in other case studies (which are usually more marketing-oriented). This white paper has the following structure: Requirements (data model, capacity planning, client tool) Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular) Data Model optimizations (memory compression, query performance, scalability) Partitioning and Processing strategy for near real-time latency Hardware selection (NUMA analysis, Azure VM tests) Scalability tests (estimation of maximum users per node) If you are in charge of evaluating Tabular as analytical engine, or if you have to design your solution based on Tabular, this white paper is a must read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers.  But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this. Owen Graupman, inContact I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical “BI system”), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
     When using the WCF receive adapter with SQL binding in Polling mode please be aware of the following problem. Problem: At some regular but seemingly random intervals the application stops processing new requests, places a lock on the database and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time. Symptoms: Orchestration instances in Dehydrated state, receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace. Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there's a receiveTimeout property set to 10 minutes by default. If during this period data is not found in the table, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus if there's no new data in the table for a long time, a new thread will be created in the host instance every 10 minutes until it reaches the threshold (1000), and then there are no threads left for this host instance and it can't start/complete any tasks. Then this host instance won't be able to do anything. If other artifacts are hosted in the instance they will suffer the consequences as well. Solution:
     - Set receiveTimeout to the maximum time 24.20:31:23.6470000.
     - Place WCF-SQL receive locations in a separate host to provide their own thread pool and eliminate impact on other processes.
     - Ensure WCF-SQL dedicated host instances are restarted at an interval less than or equal to receiveTimeout to flush threads and memory.
     - Monitor the performance counters Process/Thread Count/BTSNTSvc{n} for the thread count trend and respond to alerts by restarting the host instance if it grows.
     If you use the WCF-SQL Adapter in the Notification mode then make sure to remove sqlAdapterInboundTransactionBehavior, otherwise this location will exhibit the same issue. In this case though, setting receiveTimeout doesn't help and a new thread will be created at the default interval (10 min), ignoring the maximum setting.
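     For orientation, a minimal sketch of where this setting lives in a WCF-SQL binding configuration (this fragment is illustrative only: the binding name is a placeholder and all polling-related properties are omitted, so compare it against the binding file exported from your own receive location):

         <bindings>
           <sqlBinding>
             <binding name="PollingBinding"
                      receiveTimeout="24.20:31:23.6470000" />
           </sqlBinding>
         </bindings>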

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
     Customize SharePoint list using InfoPath2010 form Part1 Customize SharePoint list using InfoPath2010 form Part2 Customize SharePoint list using InfoPath2010 form Part3 In this post, I’d like to show you how to create print functionality in InfoPath for a SharePoint list. The print functionality is provided out of the box in an InfoPath form library; however it is not available in a SharePoint list. Here are the steps to create the print functionality. You can download the new form here.
     1. Create a print page in the list by first copying and pasting displayifs.aspx and renaming the file to Printifs.aspx.
     2. Open the page in SharePoint Designer and copy the following JavaScript to the PlaceHolderTitleAreaClass ContentPlaceHolder.
         <script type="text/javascript">
         $(document).ready(function(){
             $("[id^='Ribbon']").hide();
             $(".s4-title").hide();
             $("[id='s4-leftpanel']").hide();
             $("[id='s4-ribbonrow']").hide();
             $("[id='s4-titlerow']").hide();
             $("[id='s4-titlerow']").css("height", "0px");
             $("body").css("background-color", "white");
             $("body").css("zoom", "135%");
             $("[id='MSO_ContentTable']").css("margin-left", "0px");
             $("[id='MT-BodyContent']").css("width", "900px");
             $(".MT-BodyArea").css("width", "900px");
             $("[id='MT-Layout']").css("width", "900px");
             $(".ms-bodyareacell").css("width", "900px");
             $(".s4-wpTopTable").css("border", "none");
             $("[id$='XmlFormView']").css("margin-left", "-80px");
             $("body").css("margin-top", "-30px");
             $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
             window.print();
         });
         </script>
     3. Open the InfoPath form for the list and create a field called PrintLink.
     4. Set the default value of PrintLink so that it points to the print page created above with the query string id. You can download the formula for the default value here.
     5. Add a new image that looks like a Print button on the display view, so the URL can be set to the PrintLink field. (The reason I did not use a button is that you cannot set the navigate URL for a button.)
     6. Set the URL of the image to the PrintLink field.
     7. Next, create the print view.
     8. Copy the contents from the display view to the print view.
     9. Finally, go to Printifs.aspx and edit the InfoPath web part to set the view to PrintView.
     10. Republish your form and you will see the form as shown below.
     11. If you click the Print button, you will see the print page and print dialog; you can also add the company logo to the print page using CSS.
     12. To deploy the customization, you can use the backup and restore content database approach; you can get more details from my previous blog post here.

    Read the article

  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
     Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on Codeplex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" error when processing as a part of a Custom Pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data can be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context resource tracker. So a block of my code goes from:

         originalStrm = bodyPart.GetOriginalDataStream();
         if (!originalStrm.CanSeek)
         {
             ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
             inmsg.BodyPart.Data = seekableStream;
             originalStrm = inmsg.BodyPart.Data;
         }
         fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
         binWriter = new BinaryWriter(fileArchive);
         byte[] buffer = new byte[bufferSize];
         int sizeRead = 0;
         while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
         {
             binWriter.Write(buffer, 0, sizeRead);
         }

     to

         originalStrm = bodyPart.GetOriginalDataStream();
         if (!originalStrm.CanSeek)
         {
             ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
             inmsg.BodyPart.Data = seekableStream;
             originalStrm = inmsg.BodyPart.Data;
         }
         pc.ResourceTracker.AddResource(originalStrm);
         fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
         binWriter = new BinaryWriter(fileArchive);
         byte[] buffer = new byte[bufferSize];
         int sizeRead = 0;
         while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
         {
             binWriter.Write(buffer, 0, sizeRead);
         }

     So far this seems to have solved the issue, the error is no more, and my archive component is continuing its way through testing.

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
     This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think just like records, these are broken way more often. I find this the funniest thing, as we all take resolutions every year but not every year can we manage to keep them. Well, does it mean we should not take resolutions? In fact I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight and I usually manage to keep eating healthy till the end of January. When February begins, I begin to lose focus on my goal and as March starts, the “As usual” eating habits begin. Looking at the positive side, what would happen if every year I did not eat healthy in January? I think that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What is my resolution for the year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here. To understand SQL Server Performance Tuning at a deeper Level I think I am already half way through. I have been very busy during any given month doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word that I have used is “good,” and not “best.” There are often cases when I am stumped, and I have no clue what to do next. Then, I usually go for my “trial and error” method - whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever go for the trial and error method again to achieve the same solution. I should know the solution right away when I see the problem. I do understand that Performance Tuning can be a strange animal at times and one cannot guess the right step every time. However, aiming for a high goal never hurts and I am going to learn more and more in this focused area. Going further from Basic BI understanding I am fairly decent with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things: MDS, StreamInsight, etc). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I think I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in User Group Sessions and other places on this subject. When I have to prepare a new subject for presentations, I think I force myself to learn more. I am committed to learning a bit more in this direction. Learning new features SQL Server 2011 Denali This is the new thing from “Microsoft” for all the SQL geeks. I am eagerly waiting for the final product later this year and I am planning to learn it well. I think if I follow my above two goals, this goal will be automatically covered. I am eager and excited for this new offering from Microsoft. I guess these are my resolutions; maybe next year about the same time, I must revisit this post and see how successful I am in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (Courtesy: Calvin and Hobbes). I think when we cannot keep our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ATG Live Webcast June 14: Technical Preview of EBS 12.2 Online Patching

    - by BillSawyer
     Online Patching is one of the cornerstone new features in our upcoming Oracle E-Business Suite 12.2 release. This ground-breaking feature is based upon Edition-Based Redefinition, a new 11gR2 Database feature that was built to Oracle Applications division specifications to allow the E-Business Suite's database tier to be patched while the environment is running.  Online Patching combines the use of Edition-Based Redefinition and new E-Business Suite technologies to allow patching to the E-Business Suite's database and application tier servers while the environment is being actively used by its end-users. This webcast provides a detailed technical preview of:
     How this new feature works
     How it affects E-Business Suite end-users
     How it affects E-Business Suite database administrators and patching lifecycles
     How it affects developers and third-party software vendors responsible for E-Business Suite customizations and extensions
     The presenter for this event is Kevin Hudson, Senior Director and one of the Online Patching architects. There will be a special extended Q&A Session at the end of this presentation, given the nature of the materials and the questions that we expect from you. ATG Development staff supporting the Q&A session will include Elke Phelps, Santiago Bastidas, Max Arderius, and other ATG architects.
     Date:          Thursday, June 14, 2012
     Time:          8:00 AM - 10:00 AM Pacific Standard Time (Special 2-hour Time)
     Presenter:     Kevin Hudson, Senior Director, Applications Technology Integration
     Webcast Registration Link (Preregistration is optional but encouraged)
     To hear the audio feed:
        Domestic Participant Dial-In Number:           877-697-8128
        International Participant Dial-In Number:      706-634-9568
        Dial-In Passcode:                              100815
     To see the presentation:
        The Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number:  597470987
     If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University.  You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here. When will Oracle E-Business Suite 12.2 be released? Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. We'll post updates here as soon as they're available.

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise as she found out that I had spent the better part of a decade managing content within the Health Insurance industry and I discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries? The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction intensive. They are the ones who manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance and not technology. You may find the mindset conservative in comparison to the IT industry, however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes. We are all aware of the impact that Healthcare reform bill has had a significant impact on the Health Insurance industry. They are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, administrative costs of health insurance include the insurer's cost to administer the health plan, the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To achieve this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancements in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans. 
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • Three Key Tenets of Optimal Social Collaboration

    - by kellsey.ruppel
    Today's blog post comes to us from John Bruswick! This post is an abridged version of John’s white paper in which he discusses three principals to optimize social collaboration within an enterprise.   By [email protected], Oracle Principal Sales Consultant Effective social collaboration is actionable, deeply contextual and inherently derives its value from business entities outside of itself. How does an organization begin the journey from traditional, siloed collaboration to natural, business entity based social collaboration? Successful enablement of enterprise social collaboration requires that organizations embrace the following tenets and understand that traditional collaborative functionality has inherent limits - it is innovation and integration in accordance with the following tenets that will provide net-new efficiency benefits. Key Tenets of Optimal Social Collaboration Leverage a Ubiquitous Social Fabric - Collaborative activities should be supported through a ubiquitous social fabric, providing a personalized experience, broadcasting key business events and connecting people and business processes.  This supports education of participants working in and around a specific business entity that will benefit from an implicit capture of tacit knowledge and provide continuity between participants.  In the absence of this ubiquitous platform activities can still occur but are essentially siloed causing frequent duplication of effort across similar tasks, with critical tacit knowledge eluding capture. Supply Continuous Context to Support Decision Making and Problem Solving - People generally engage in collaborative behavior to obtain a decision or the resolution for a specific issue.  The time to achieve resolution is referred to as "Solve Time".  Users have traditionally been forced to switch or "alt-tab" between business systems and synthesize their own context across disparate systems and processes.  The constant loss of context forces end users to exert a large amount of effort that could be spent on higher value problem solving. Extend the Collaborative Lifecycle into Back Office - Beyond the solve time from decision making efforts, additional time is expended formalizing the resolution that was generated from collaboration in a system of record.  Extending collaboration to result in the capture of an explicit decision maximizes efficiencies, creating a closed circuit for a particular thread.  This type of structured action may exist today within your organization's customer support system around opening, solving and closing support issues, but generally does not extend to Sales focused collaborative activities. Excelling in the Unstructured Future We will always have to deal with unstructured collaborative processes within our organizations.  Regardless of the participants and nature of the collaborate process, two things are certain – the origination and end points are generally known and relate to a business entity, perhaps a customer, opportunity, order, shipping location, product or otherwise. Imagine the benefits if an organization's key business systems supported a social fabric, provided continuous context and extended the lifecycle around the collaborative decision making to include output into back office systems of record.   The technical hurdle to embracing optimal social collaboration would fall away, leaving the company with an opportunity to focus on and refine how processes were approached.  
Time and resources previously required could then be reallocated to focusing on innovation to support competitive differentiation unique to your business. How can you achieve optimal social collaboration? Oracle Social Network enables business users to collaborate with each other using a broad range of collaboration styles and integrates data from a variety of sources and business applications -- allowing you to achieve optimal social collaboration. Looking to learn more? Read John's white paper, where he discusses in further detail the three principals to optimize social collaboration within an enterprise. 

    Read the article

  • Alternatives to Professional Version Control

    - by greengit
    We're teaming up with some non programmers (writers) who need to contribute to one of our projects. Now they just don't like the idea of using Git (or anything for that matter) for version controlling their work. I think this is because they just don't find it worthwhile to wrap their heads around the twisted concepts of version control. (when I first introduced them to branching and merging -- they looked like I was offending them.) Now, we're not in a position to educate them or convince them to use it. We're just trying to find alternatives so that we get all their work versioned (which is what we need) -- and they get easy workflow and concentrate on what they do. I have come up with some ideas... tell them to save their work as a separate file every time they make some non-trivial change, and then use a diff on our side to just track changes. write a program (in Python) that implements the "milestones" in CSSEdit in some way. About the project: It is a natural language processing system (written in C + Python). We've hired some writers to prepare inputs for the system in different languages. And as we evolve the software, we'd need those writers to make changes to their inputs (articles). Sometimes the changes are very small (a word or two), and other times big. The reason we need to version control those changes is because every small/big change in the input has the potential to change the system's output dramatically.
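     To illustrate the second idea, here is a minimal sketch of what such a "milestones" script could look like (the script name, the .milestones folder and the command-line interface are all hypothetical choices): each time a writer runs it against their document, it prints what changed since the last milestone and then saves a new timestamped snapshot. On our side, those snapshots (or the live file itself) can then be committed to Git in bulk, so the writers never have to touch version control themselves.

         #!/usr/bin/env python3
         # milestone.py - snapshot a document and show the diff against the previous
         # snapshot. Paths and naming below are illustrative only.
         import difflib
         import shutil
         import sys
         from datetime import datetime
         from pathlib import Path

         def milestone(path):
             src = Path(path)
             snapdir = src.parent / ".milestones"
             snapdir.mkdir(exist_ok=True)
             snapshots = sorted(snapdir.glob(src.stem + "-*" + src.suffix))

             if snapshots:
                 # Show what changed since the most recent milestone.
                 old = snapshots[-1].read_text(encoding="utf-8").splitlines(keepends=True)
                 new = src.read_text(encoding="utf-8").splitlines(keepends=True)
                 sys.stdout.writelines(
                     difflib.unified_diff(old, new, fromfile=snapshots[-1].name, tofile=src.name)
                 )

             # Record the new milestone as a timestamped copy of the document.
             stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
             shutil.copy2(src, snapdir / f"{src.stem}-{stamp}{src.suffix}")

         if __name__ == "__main__":
             milestone(sys.argv[1])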

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system. http://create.msdn.com/en-US/education/catalog/sample/particle_3d I have a function such that when I press Space, all the particles have their positions and velocities set to 0. for (int i = 0; i < particles.GetLength(0); i++) { particles[i].Position = Vector3.Zero; particles[i].Velocity = Vector3.Zero; } However, when I press space, the particles are still moving. If I go to FireParticleSystem.cs I can turn settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0). As I understand it, the problem lies in the fact that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and multiplying by their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.
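     One way to do that, as a rough sketch rather than the sample's actual code (it assumes the particles array keeps Position and Velocity as in the snippet above, and that the vertex data is re-uploaded each frame), is to stop letting the vertex shader extrapolate positions from age and instead integrate on the CPU every Update:

         // CPU-side integration sketch: positions are advanced here each frame, so a
         // wind force can be applied directly. The field names (particles, settings,
         // vertexBuffer) are assumed from the sample and may differ in your copy.
         public void UpdateParticlesOnCpu(GameTime gameTime, Vector3 wind)
         {
             float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

             for (int i = 0; i < particles.Length; i++)
             {
                 particles[i].Velocity += (settings.Gravity + wind) * dt;
                 particles[i].Position += particles[i].Velocity * dt;
             }

             // Re-upload the modified vertices so the GPU simply draws what the CPU
             // computed instead of recomputing positions from particle age in the shader.
             vertexBuffer.SetData(particles);
         }

     The sample's vertex shader would then also need to be simplified to use the incoming position as-is rather than reconstructing it from age, gravity and initial velocity.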

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series. Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests with great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with (and I count myself in that bucket), the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD. For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code. The second thing he doesn't like is backing into an architecture (go read his post to see what he means). I've certainly had to work with code like that before, and it's a nightmare: the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for also make it easy to refactor. I also think Brian is missing an important point: we aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.
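
    To make the "write a failing test, make it pass, refactor" loop concrete, here is a minimal, hypothetical example (NUnit-style syntax is assumed; the PriceCalculator class and its discount rule are invented purely to illustrate the rhythm, not taken from Brian's post or this one):

        using NUnit.Framework;

        // Step 1 (red): write the test first; it cannot pass until ApplyDiscount exists and works.
        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void OrdersOverOneHundredGetTenPercentDiscount()
            {
                var calculator = new PriceCalculator();
                Assert.AreEqual(180m, calculator.ApplyDiscount(200m));
            }
        }

        // Step 2 (green): the simplest implementation that makes the test pass.
        public class PriceCalculator
        {
            public decimal ApplyDiscount(decimal total)
            {
                return total > 100m ? total * 0.9m : total;
            }
        }

        // Step 3 (refactor): with the test in place, the implementation can be reshaped freely;
        // any regression shows up as a red test instead of a subtle runtime bug.

    The point is not the toy rule; it is that the test exists before the code and keeps protecting it through every later refactoring.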

    Read the article

  • How Do I Print Photos?

    - by Takkat
    Other than on Windows, in Ubuntu there are no fancy utilities provided by printer manufacturers for printing photos. I am aware of Gnome Photo Printer and of Photoprint, the former being easy to handle, the latter having more options. However, I wonder whether there are any other, maybe even better, alternatives (including plugins) that can perform the following tasks: print photos in the best photo resolution the driver offers; adjust the paper size for standard photo paper values; choose the paper tray if the printer has more than one; print multiple photos on one page, including mixed sizes (grids); make multiple prints with the same settings; do borderless printing if the printer is capable of it. Any additional options like pre-processing for color correction or noise reduction would be nice to have but are not essential. Update: according to this spec it seems it is not so easy to accomplish the simple task of printing photos. Indeed, all the applications I have gone through have major drawbacks that make printing photos almost impossible. Below I will list what put me off using each of them for photo printing: Gnome Photo Printer: no thumbnails, no grids. Photoprint: does not keep settings, GUI broken, no standard photo sizes, no thumbs. Eye Of Gnome: no multiple pages, no grids. Gimp + Images Grid Layout: far too many steps, only to find that prints always differ from their previews. F-Spot: no grids. Picasa 3: no grids, very few fixed paper sizes, 300 dpi only. flPhoto: strange GUI, no thumbs, no printer settings, did not print at all. Windows: oops - everything works fine! But I want Ubuntu to do this! After half a pack of ink cartridges and half a pack of photo paper cards I am getting tired of testing. At least Gimp and Picasa looked promising, but both break that promise when it comes to printing. I'd already be happy to quickly print a few photos with EOG if bug #80220 were fixed - but it's still on the "wishlist".

    Read the article

  • Five geeky things you must do with your Android Smartphone

    - by Gopinath
    Android is the Windows of the next generation. It's open, free, widely adopted and smart enough to outsmart Apple's iOS. It is called a stolen product and a cheap imitation of iOS, but Steve Jobs himself once quoted the saying that good artists copy and great artists steal. Alright, this post is not about Android vs iOS or whether it is really stolen or not. Android is a great OS for mobile devices, and it lets you do amazing things with a phone. In this post I want to write about the geeky things we can do with an Android smartphone. Control your computer using your mobile: assume it is a lazy weekend and you are on the couch watching movies on a laptop a meter away. Now you want to adjust the volume or skip a scene/song. How do you control your laptop without moving off the couch? Just install the free Universal Remote app on your smartphone and start controlling your computer from the phone. The Universal Remote app controls computers over WiFi or Bluetooth, with dedicated remote controls for various media players and applications like YouTube, VLC & Spotify. The application is very easy to use and works amazingly well. A few of the remote controls provided in the app are Mouse, Keyboard, Media Controls, Power, Start, Windows Media Player, VLC Player and YouTube. There is also a paid version of this app with additional remotes, but for most users the free version is good enough. Stream YouTube videos playing on your mobile to a computer: you can stream YouTube videos playing on your mobile to a computer or smart TV. This is similar to Apple's popular AirPlay feature, but it works only with YouTube videos. To start streaming videos, install Google's YouTube Remote on your smartphone, open youtube.com/leanback on your computer and pair the phone with the computer. Once the pairing is done, videos played in the YouTube Remote app are streamed to your computer. Access your mobile using any web browser (send/receive SMS, view photos/call logs, etc.): want to control your mobile phone from a computer? Install the AirDroid app on your phone and start controlling it from a computer browser: send and receive messages, view call logs, play music, upload/download files, edit contacts and more. At times it is a lot of fun to access your mobile from a big-screen device like a laptop. Launch a webpage in your mobile browser using your computer: with Google Chrome to Phone installed on your computer and mobile, you can send links and other information from the Chrome browser to your Android device. With a click in the Chrome browser, the current webpage is automatically launched on the Android device. This is very handy when you want to send links, send driving directions to the phone using Google Maps, or launch the phone dialer with a number selected on a webpage. Install apps on your mobile using a computer: to install apps on your smartphone you really don't need to touch it. Open any web browser, sign in to Google Play with the Google ID associated with your smartphone, and start installing apps on your phone right from the browser. As you browse apps on the Google Play store you will find an Install button; all you need to do is click it, and Google Play automatically installs the app on your phone within a few seconds.

    Read the article

  • WebSocket Samples in GlassFish 4 build 66 - javax.websocket.* package: TOTD #190

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus) integrated in GlassFish 4 promoted builds:
    TOTD #183: Getting Started with WebSocket in GlassFish
    TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    TOTD #186: Custom Text and Binary Payloads using WebSocket
    TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4
    The earlier posts created a WebSocket endpoint as:

        import javax.net.websocket.annotations.WebSocketEndpoint;
        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    Based upon the discussion in the JSR 356 EG, the package names have changed to javax.websocket.*. So the updated endpoint definition will look like:

        import javax.websocket.WebSocketEndpoint;
        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    The POM dependency is:

        <dependency>
          <groupId>javax.websocket</groupId>
          <artifactId>javax.websocket-api</artifactId>
          <version>1.0-b09</version>
        </dependency>

    And if you are using GlassFish 4 build 66, then you also need to provide a dummy EndpointFactory implementation as:

        import javax.websocket.WebSocketEndpoint;
        @WebSocketEndpoint(value="websocket", factory=MyEndpoint.DummyEndpointFactory.class)
        public class MyEndpoint { . . .
          class DummyEndpointFactory implements EndpointFactory {
            @Override public Object createEndpoint() { return null; }
          }
        }

    This is only interim and will be cleaned up in subsequent builds, but I've seen a couple of complaints about it already, so it deserves a short post. Have you been tracking the latest Java EE 7 implementations in GlassFish 4 promoted builds?

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly: for non-destructible/non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If generated meshes per chunk are the way to go, what kind of algorithm might I want to use, given that only the outer layer ever needs to be rendered? I'm using 3D Perlin noise for terrain generation (for overhangs/caves/etc.). The layout is fantastic, but even for around 20k visible faces it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only rendering the top faces of my cube-like terrain, but with 20k quad instances it's still pretty sluggish (30 fps on my machine). My goal is for the world to be made of quite small cubes. Where other games (e.g. Minecraft) make the player 1x1 cube in width/length and 2 cubes high, I'm shooting for 6x6 in width/length and 9 high. Besides a lot of gameplay advantages, that also means I could quite easily have a single scene with millions of truly visible quads. So I have been looking into changing my method from instancing to mesh generation on a chunk-by-chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know whether it's the better route for my situation. I'm also starting to doubt my need for 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has better suggestions there, feel free to throw them at me too. Thanks, Mythics
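
    For reference, the usual per-chunk approach is to walk the voxel grid once, emit geometry only for faces that border air, and upload the result as one static vertex buffer per chunk, so the GPU draws a single mesh instead of tens of thousands of instances. A rough sketch of that idea for top faces only, as in the question (the Chunk layout, the Size constant and the use of VertexPositionColor are illustrative assumptions, not taken from the original code):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class Chunk
        {
            public const int Size = 32;   // voxels per side, illustrative
            public bool[,,] Solid = new bool[Size, Size, Size];

            // Build one triangle list for the whole chunk: two triangles per exposed
            // top face. Faces buried under another solid voxel are never emitted.
            public List<VertexPositionColor> BuildTopFaceMesh(Vector3 chunkOrigin)
            {
                var vertices = new List<VertexPositionColor>();
                for (int x = 0; x < Size; x++)
                    for (int y = 0; y < Size; y++)
                        for (int z = 0; z < Size; z++)
                        {
                            if (!Solid[x, y, z]) continue;
                            bool exposed = (y == Size - 1) || !Solid[x, y + 1, z];
                            if (!exposed) continue;

                            Vector3 p = chunkOrigin + new Vector3(x, y + 1, z);
                            var c = Color.ForestGreen;
                            var a = new VertexPositionColor(p, c);
                            var b = new VertexPositionColor(p + Vector3.UnitX, c);
                            var d = new VertexPositionColor(p + Vector3.UnitZ, c);
                            var e = new VertexPositionColor(p + Vector3.UnitX + Vector3.UnitZ, c);

                            // one quad = two triangles
                            vertices.Add(a); vertices.Add(b); vertices.Add(d);
                            vertices.Add(b); vertices.Add(e); vertices.Add(d);
                        }
                return vertices;
            }
        }

    The returned list is copied once into a VertexBuffer with SetData and drawn with a single DrawPrimitives call per chunk. Because the terrain is non-destructible, each chunk's mesh is built exactly once at load time, and per-chunk frustum culling keeps the number of draw calls small.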

    Read the article

  • CodePlex Daily Summary for Wednesday, July 04, 2012

    CodePlex Daily Summary for Wednesday, July 04, 2012Popular ReleasesMVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...DynamicToSql: DynamicToSql 1.0.0 (beta): 1.0.0 beta versionCommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. 
To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? ·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。AssaultCube Reloaded: 2.5 Intrepid: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) You should delete /home/config/saved.cfg to reset binds/other stuff If you us...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.0: User Right Licensing ContentType version 2.0.267.1Bongiozzo Photosite: Alpha: Just first stable releaseMDS MODELING WORKBOOK: MDS MODELING WORKBOOK: This is the initial release. Works with SQL 2008 R2 Master Data Services. Also works with SQL 2012 Master Data Services but has not been completely tested.Logon Screen Launcher: Logon Screen Launcher 1.3.0: FIXED - Minor handle leak issueBF3Rcon.NET: BF3Rcon.NET 25.0: This update brings the library up to server release R25, which includes the few additions from R21. There are also some minor bug fixes and a couple of other minor changes. In addition, many methods now take advantage of the RconResult class, which will give error information on failed requests; this replaces the bool returned by many methods. There is also an implicit conversion from RconResult to bool (both of which were true on success), so old code shouldn't break. ChangesAdded Player.S...TelerikMvcGridCustomBindingHelper: Version 1.0.15.183-RC: TelerikMvcGridCustomBindingHelper 1.0.15.183 RC This is a RC (release candidate) version, please test and report any error or problem you encounter. Warning: There are many changes in this release and some of them break backward compatibility. 
Release notes (since 0.5.0-Alpha version): Custom aggregates via an inherited class or inline fluent function Ignore group on aggregates for better performance Projections (restriction of the database columns queried) for an even better performa...PunkBuster™ Screenshot Viewer: PunkBuster™ Screenshot Viewer 1.0: First release of PunkBuster™ Screenshot ViewerDesigning Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release PreviewNew ProjectsAzureMVC4: hiBoonCraft Launcher: BoonCraft Launcher V2.0 See http://352n.dyndns.org for more info on BoonCraftC# to Javascript: Have you ever wanted to automagically have access to the enums you use in your .NET code in the javascript code you're writing for client-side?CMCIC payment gateway provider for NB_Store: CMCIC payment gateway provider for NB_StoreCOFE2 : Cloud Over IFileSystemInfo Entries Extensions: COFE2 enable user to access the user-defined file system on local or foreign computer, using a System.IO-like interface or a RESTful Web API.Directory access via LDAP: .NET library for managing a directory via LDAP.E-mail processing: .NET library for processing e-mail.FAST Search for SharePoint Query Statistics: F4SP Query Statistics scans the FAST for SharePoint Query Logs and presents statistics based on the logs. Total Queries, Top Queries, Queries per user etc...File Backup: This project is an open source windows azure cloud backup win forms application.HanxiaoFu's personal: This will help synchronizing my work done in home and at workLifekostyuk: This is my first project on TFSNet WebSocket Server: NetWebSocket Server is c# based hight performance and scalable Websocket server. Posroid for Windows 8: ?? ??????? ????? ?? ?? ????? ??? ?? ??? ??? ???? ? ? ???, ???8??? ???? ?? ??? ? ? ??? ??? ????? ?????.PowerRules: PowerRules is a group of scripts that help you audit your farm for Configuration Drift (Configuration changes over time)Projet Niloc TETRAS: Student Project to know how to manage and coordinating a team.proyectobanco: PROFE AQUI ESTA EL PROYECTO DISCULPE NOMAS ATT SANCANsheetengine - Isometric HTML5 JavaScript Display Engine: Sheetengine is an HTML5 canvas based isometric display engine for JavaScript. It features textures, z-ordering, shadows, intersecting sheets, object movements.Shiny2: GTS Spoofing program for Generation IV and V of Pokemon.SMS Backup & Restore XML to MySQL: The purpose is to take the XML files created by SMS Backup and Restore (Android) and importing them via a Dropbox/Google Drive synch into a MySQL dbStundenplan TSST: App für Windows Phone um die einzelnen Vertretungspläne der technischen Schule Steinfurt anzusehenswalmacenamiento: Proyecto para el almacenamiento de registrosTFS Work Item Association Check-in Policy: This policy requires TFS source control check-ins to be associated with a single, in-progress task that is assigned to you.TurboTemplate: TurboTemplate is a fast source code generation helper which quick transforms between your SQL database and some templated text of your choice.visblog: this is short summary of my projectVisualHG_fliedonion: This is fork of VisualHG. This will used by improve VisualHG for me. support only Visual Studio 2008 (not SP1). Wave Tag Library: A very simple and modest .wav file tag library. With this library you can load .wav files, edit the tags (equivalent to mp3's ID3 tags) and save back to file.Wordpress: WordPress is web software you can use to create a beautiful website or blog. 
We like to say that WordPress is both free and priceless at the same time.ZEAL-C02 Bluetooth module Driver for Netduino: A class library for the .NET Micro Framework to support the Zeal-C02 Bluetooth module for Netduino.

    Read the article

< Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >