Search Results

Search found 32252 results on 1291 pages for 'software services'.


  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
    IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.
Oracle Solaris Binary Application Guarantee
A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10 and now runs it on Solaris 11, without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.
Oracle Solaris Image Packaging System (IPS)
IPS is a new repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system in order to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature. Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.
Extreme Network Performance
IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) running on T2 consumes 40%-50% CPU, while it consumes only 10% of the CPU on T4. The testing environment comprised Softswitch (call management), TappS (Telecom Application Server) and a Billing Server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there is a huge improvement in file system and disk I/O operations; they ran all tests with maximum logging capability and it didn't influence any benchmark values. "Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware.
This means instead of deploying the solution on six T2-based machines, we will deploy on 2 redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization" Shimon Lichter, VP R&D, IPgallery In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator, the end-customer, to deliver a high quality service while maintaining high profitability.
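As a rough illustration of the IPS workflow described above (a sketch only; the Solaris Studio package name and publisher configuration are assumptions rather than details taken from IPgallery's environment), installing a package and then checking its integrity looks like this:
# Install Solaris Studio 12.3 from the configured IPS publisher (package name is illustrative)
pkg install developer/solarisstudio-123
# Verify the installed files against the repository metadata (the built-in verification feature)
pkg verify developer/solarisstudio-123
# Repair anything that verify reported as damaged or missing
pkg fix developer/solarisstudio-123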

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience. It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0.  This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity  to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex. A Release Candidate represents a software version which is very near to “release” quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts ( if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder, Joe Brinkman titled "What's In A Name?" ). Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements including ASP.NET 4.0.  As a result we are encouraging all major stakeholders in the ecosystem ( module developers, designers, partners, customers, etc... ) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th. On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available. I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Automated SSRS deployment with the RS utility

    - by Stacy Vicknair
    If you’re familiar with SSRS and development you are probably aware of the SSRS web services. The RS utility is a tool that comes with SSRS that allows scripts to be executed against the SSRS web service without needing to create an application to consume the service. One of the better benefits of using this format rather than writing an application is that the script can be modified by others who might be involved in the creation and addition of scripts or management of the SSRS environment.
Reporting Services Scripter
Jasper Smith from http://www.sqldbatips.com created Reporting Services Scripter to assist with the creation of a batch process to deploy an entire SSRS environment. The helper scripts below were created through the modification of his generated scripts. Why not just use this tool? You certainly can. For me, the volume of scripts generated seems less maintainable than just using some common methods extracted from these scripts and creating a deployment in a single script file. I would, however, recommend it as a product if you do not think that your environment will change drastically or if you do not need a higher level of control over the deployment. If you just need to replicate, this tool works great.
Executing with RS.exe
Executing a script against rs.exe is fairly simple; a sketch of a typical invocation appears after the script below.
The Script
Half the battle is having a starting point. For the scripting I needed to do, the below is the starter script. A few notes:
- This script assumes integrated security.
- This script assumes your reports have one data source each.
Both of the above are just what made sense for my scenario and are definitely modifiable to accommodate your needs. If you are unsure how to change the scripts to your needs, I recommend Reporting Services Scripter to help you understand the differences. The script has three main methods: CreateFolder, CreateDataSource and CreateReport. Scripting the server deployment is just a process of recreating all of the elements that you need through calls to these methods. If there are additional elements that you need to deploy that aren’t covered by these methods, again I suggest using Reporting Services Scripter to get the code you would need, convert it to a repeatable method and add it to this script!
Public Sub Main()
    CreateFolder("/", "Data Sources")
    CreateFolder("/", "My Reports")
    CreateDataSource("/Data Sources", "myDataSource", _
        "Data Source=server\instance;Initial Catalog=myDatabase")
    CreateReport("/My Reports", _
        "MyReport", _
        "C:\myreport.rdl", _
        True, _
        "/Data Sources", _
        "myDataSource")
End Sub

Public Sub CreateFolder(parent As String, name As String)
    Dim fullpath As String = GetFullPath(parent, name)
    Try
        RS.CreateFolder(name, parent, GetCommonProperties())
        Console.WriteLine("Folder created: {0}", name)
    Catch e As SoapException
        If e.Detail.Item("ErrorCode").InnerText = "rsItemAlreadyExists" Then
            Console.WriteLine("Folder {0} already exists and cannot be overwritten", fullpath)
        Else
            Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
        End If
    End Try
End Sub

Public Sub CreateDataSource(parent As String, name As String, connectionString As String)
    Try
        RS.CreateDataSource(name, parent, False, GetDataSourceDefinition(connectionString), GetCommonProperties())
        Console.WriteLine("DataSource {0} created successfully", name)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Sub CreateReport(parent As String, name As String, location As String, overwrite As Boolean, dataSourcePath As String, dataSourceName As String)
    Dim reportContents As Byte() = Nothing
    Dim warnings As Warning() = Nothing
    Dim fullpath As String = GetFullPath(parent, name)

    'Read RDL definition from disk
    Try
        Dim stream As FileStream = File.OpenRead(location)
        reportContents = New [Byte](stream.Length - 1) {}
        stream.Read(reportContents, 0, CInt(stream.Length))
        stream.Close()

        warnings = RS.CreateReport(name, parent, overwrite, reportContents, GetCommonProperties())

        If Not (warnings Is Nothing) Then
            Dim warning As Warning
            For Each warning In warnings
                Console.WriteLine(warning.Message)
            Next warning
        Else
            Console.WriteLine("Report: {0} published successfully with no warnings", name)
        End If

        'Set report DataSource references
        Dim dataSources(0) As DataSource

        Dim dsr0 As New DataSourceReference
        dsr0.Reference = dataSourcePath
        Dim ds0 As New DataSource
        ds0.Item = CType(dsr0, DataSourceDefinitionOrReference)
        ds0.Name = dataSourceName
        dataSources(0) = ds0

        RS.SetItemDataSources(fullpath, dataSources)

        Console.WriteLine("Report DataSources set successfully")

    Catch e As IOException
        Console.WriteLine(e.Message)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Function GetCommonProperties() As [Property]()
    'Common CatalogItem properties
    Dim descprop As New [Property]
    descprop.Name = "Description"
    descprop.Value = ""
    Dim hiddenprop As New [Property]
    hiddenprop.Name = "Hidden"
    hiddenprop.Value = "False"

    Dim props(1) As [Property]
    props(0) = descprop
    props(1) = hiddenprop
    Return props
End Function

Public Function GetDataSourceDefinition(connectionString As String)
    Dim definition As New DataSourceDefinition
    definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
    definition.ConnectString = connectionString
    definition.Enabled = True
    definition.EnabledSpecified = True
    definition.Extension = "SQL"
    definition.ImpersonateUser = False
    definition.ImpersonateUserSpecified = True
    definition.Prompt = "Enter a user name and password to access the data source:"
    definition.WindowsCredentials = False
    definition.OriginalConnectStringExpressionBased = False
    definition.UseOriginalConnectString = False
    Return definition
End Function

Private Function GetFullPath(parent As String, name As String) As String
    If parent = "/" Then
        Return parent + name
    Else
        Return parent + "/" + name
    End If
End Function
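For completeness, here is roughly how I invoke the finished script with the rs utility (a sketch; the .rss file name and report server URL are placeholders for your own values, and integrated security is assumed to match the script above):
rem Run the deployment script against the target report server
rs.exe -i DeploySsrsEnvironment.rss -s http://myserver/ReportServer
The -i switch points at the script file and -s at the report server URL; the rs utility also accepts -v name="value" pairs if you want to pass global variables into the script.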

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking.  Our customers have high expectations (rightfully so) and we wouldn’t have it any other way!   Northwest Cadence has some of the most exciting customers I have ever worked with and even though I have only been here just over a month I have already: Provided training/consulting for 3 government departments Created and taught courseware for delivering Scrum to teams within a high profile multinational company Started presenting Microsoft's ALM Engagement Program  So if you are interested in helping companies build better software more efficiently, then.. Enquire at [email protected] Application Lifecycle Management (ALM) Consultant An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies and acting as a “trusted advisor” to customers and internal teams. Sound sense of business and technical strategy required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects. Job Requirements Minimum Education: Bachelor’s Degree (computer science, engineering, or math preferred). Locale / Travel: The Practice Lead position requires estimated 50% travel, most of which will be in the Continental US (a valid national Passport must be maintained).  This is a full time position and will be based in the Kirkland office. Preferred Education: Master’s Degree in Information Technology or Software Engineering; Premium Microsoft Certifications on .NET (MCSD) or MCPD or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience. Minimum Experience and Skills: 7+ years experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment. Essential Duties & Responsibilities: Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM. Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees. Lead development teams through the complete ALM and/or Visual Studio Team System solution. 
Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises. Define technical solution requirements. Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution. Ensure that the solution is designed, developed and deployed in accordance with the agreed upon development work plan. Create and deliver weekly status reports of training and/or consulting progress.
Engagement Responsibilities:
· Have a strong desire to provide thought leadership related to technology and to help grow the business.
· Work effectively and professionally with employees at all levels of a customer’s organization.
· Have strong verbal and written communication skills.
· Have effective presentation, organizational and planning skills.
· Have effective interpersonal skills and the ability to work in a team environment.
Enquire at [email protected]

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description
Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world’s first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations and scalability for Business Intelligence and Enterprise Performance Management applications.
This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle’s proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations.
This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include the TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse Architect, developer or consultant, you don’t want to miss this 3-day workshop. Register Now!
Presentations
- Exalytics Architectural Overview
- Upgrade and Lifecycle Management
- TimesTen for Exalytics
- Summary Advisor Utility
- Essbase and EPM System on Exalytics
- Dashboard and Analysis Interactions
- OBIEE 11.1.1.6 Features and Advanced Topics
Lab Outline
The labs showcase Oracle Exalytics core components and functionality and provide experience with Oracle Business Intelligence 11.1.1.6 new features and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed.
- Lab Environment Setup
- Creating and Working with Oracle TimesTen In-Memory Database
- Running Summary Advisor Utility
- Working with Exalytics Visualization Features – Dashboard and Analysis Interactions
Audience
- Oracle Partners
- BI and EPM Application Developers and Implementers
- System Integrators and Solution Consultants
- Data Warehouse Developers
- Enterprise Architects
Prerequisites
- Experience and understanding of OBIEE 11g is required
- Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended
- Good understanding of data warehousing and data modeling for reporting and analysis purposes
- Strong experience with database technologies preferred
Equipment Requirements
This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements:
Hardware
- Minimum 8GB RAM
- 60 GB free space (includes staging)
- USB 2.0 port (at least one available)
- It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.
Software
- One of the following operating systems: a 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
- Internet Explorer 7.x/8.x or Firefox 3.5.x
- WinRAR or 7-Zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
- Oracle VirtualBox 4.0.2 or higher: downloadable from http://www.virtualbox.org/wiki/Downloads
- CPU virtualization mode needs to be enabled. We will provide guidance on the day of the workshop.
Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.
Schedule
This workshop is 3 days. Times vary by country!
9:00am: Sign-in and technical setup
9:30am: Workshop starts
5:00pm: Workshop ends
Oracle Exalytics and Business Intelligence (OBIEE) Workshop
December 11-13, 2012: Oracle BVP, Birmingham, UK. Register Here.
Questions? Send email to: [email protected]
Oracle Platform Technologies Enablement Services

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called master-master or bi-directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scale-out purposes. Oracle GoldenGate is known to be one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information):
- Robust loop-back prevention
- Comprehensive conflict detection and resolution support
- Heterogeneous support across different database versions and operating systems
Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example configures active-active replication between a MySQL 5.5 and a MySQL 5.6 instance, as follows:
MySQL 5.5 (Manager Port: 15105)
Extract
EXTRACT demoex01
SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
DBOPTIONS CONNECTIONPORT 3305
DBOPTIONS HOST oraclelinux6.localdomain
SOURCEDB test USERID root, PASSWORD mysql
EXTTRAIL ./dirdat/extract/de
TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
REPORTROLLOVER AT 05:30 ON saturday
TABLE test.TCUSTMER;
TABLE test.TCUSTORD;
Pump
EXTRACT demopm01
RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
RMTTRAIL ./dirdat/replicat/ps
PASSTHRU
TABLE test.TCUSTMER;
TABLE test.TCUSTORD;
Replicat
replicat demorp01
setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
dboptions host oraclelinux6.localdomain, connectionport 3305
targetdb test, userid root, password mysql
sourcedefs ./dirdat/replicat/democust.def
discardfile ./dirrpt/demprp01.dsc, purge
REPERROR (DEFAULT, ABEND)
REPERROR (1062, IGNORE)
map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
map test.TCUSTORD, target test.TCUSTORD;
MySQL 5.6 (Manager Port: 15106)
Replicat
replicat demorp01
setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
dboptions host oraclelinux6.localdomain, connectionport 3306
targetdb test, userid root, password mysql
--assumetargetdefs
sourcedefs ./dirdat/replicat/democust.def
discardfile ./dirrpt/demprp01.dsc, purge
map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
map test.TCUSTORD, target test.TCUSTORD;
Extract
EXTRACT demoex01
SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
DBOPTIONS CONNECTIONPORT 3306
DBOPTIONS HOST oraclelinux6.localdomain
SOURCEDB test USERID root, PASSWORD mysql
EXTTRAIL ./dirdat/extract/de
TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
TABLE test.TCUSTMER;
TABLE test.TCUSTORD;
Pump
EXTRACT demopm01
RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
RMTTRAIL ./dirdat/replicat/ps
PASSTHRU
TABLE test.TCUSTMER;
TABLE test.TCUSTORD;
The setup parameters are quite self-explanatory. The key part of the setup is preventing replicated data from looping back between the two instances.
Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions already applied by Replicat, and thus avoids recapturing those transactions in the Oracle GoldenGate Extracts. The corresponding setup in the Extract for the MySQL 5.5 instance is shown as follows:
TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
Setting up active-active replication is often more complicated than this and requires a number of additional considerations. I will elaborate on these in follow-up discussions.
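For reference, a minimal sketch of how the checkpoint table referenced above might be created from GGSCI before the Replicats are started (it assumes the same test schema and root credentials as the parameter files; adjust to your environment):
GGSCI> DBLOGIN SOURCEDB test USERID root, PASSWORD mysql
GGSCI> ADD CHECKPOINTTABLE test.checkpoint_tbl
The same table name is then referenced both by the Replicat checkpoint configuration and by the FILTERTABLE clause in the Extract, which is what lets Extract skip the transactions that Replicat itself applied.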

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got my Eclipse from the Ubuntu Software Center, it was good and worked fine. I could work on Java projects fine. This week I was required to add ADT and tried the ADT bundle, assuming it had everything I needed, seeing that the SDK had more steps. So now, I can create Android apps using the ADT bundle. I tried to work on my Java projects again and I now discovered:
- I can't run my Java projects: "The selection cannot be launched. And there are no recent launches." error.
- I also believe Eclipse doesn't know it's a Java program because it's all in black and white. Not the usual green/blue/red/black things when making comments, variables and Strings.
- I can't make new projects of ANYTHING unless I use the ADT bundle. New Project only offers CVS (whatever that is).
- My perspectives seem limited. I remember there being more choices and now I'm limited to [Java], Resource, CVS Repository, Debug, Team Sync. I was told I would be able to use perspectives to swap between Android and Java development. Even after the ADT installation using "Install New Software", nothing.
- I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it then reinstalling it via the Ubuntu Software Center. No results other than its temporary removal.
- (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in the summary of what I did...)
Although, on checking the versions and installation history, I confirmed Android and Java are installed. It probably just doesn't know they're there.
Eclipse Indigo: Version: 3.7.2, Build id: I20110613-1736
Summary of what I did before and during the problem:
- Downloaded the ADT bundle.
- Attempted instructions from teacher. (Install New Software) (Failed, but other than an annoying "can't find repository" during each update, no damage to report) (Fixed)
- Ran the "eclipse" executable from the ADT bundle.
- Updated Eclipse. (After restart, I noticed the problem.) NOTE: other than window arrangement, I had no customizations.
- Played around with the Window > Preferences and Project > Properties settings. Restored the default settings after no results.
- Tried "apt-get purge eclipse". Couldn't find Eclipse, so nothing happened. Used the Software Center. No results.
- Tried swapping workspaces. I tried a different folder, a deeper folder, renaming. All return the same problem.
- Deleted the ADT bundle (browsed folders then deleted it). Got the ADT SDK only. Installed it. Can't find any changes other than some disk space usage. Of course, I can't make Android apps until I unzip the bundle again.
- Window > Preferences > Install/Update > Available Software Sites: checked as many repositories as possible, then updated. Still nothing.
I'm about to make a second attempt at uninstalling it, because I think my last action will just be taking up space. But I'll wait for tomorrow, in case the answer will help. Any thoughts?

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members -- and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection, and cover the following topics:
SOA Practitioner Guides
Creating an SOA Roadmap (PDF, 54 pages, published: February 2012)
The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.
A Framework for SOA Governance (PDF, 58 pages, published: February 2012)
Successful SOA requires a strong governance strategy that designs-in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc. which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.
Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012)
SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.
Identifying and Discovering Services (PDF, 64 pages, published: March 2012)
What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure for the success of a SOA initiative. This document describes a pragmatic approach for collecting the necessary information for identifying proper services and facilitating service reuse.
Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012)
Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.
SOA Reference Architectures
SOA Foundation (PDF, 70 pages, published: February 2012)
This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.
SOA Infrastructure (PDF, 86 pages, published: February 2012)
Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.
SOA White Papers and Data Sheets
Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012)
Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits.
This executive datasheet and whitepaper describe Oracle's proven approach to SOA.
Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012)
SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • Deliberate Practice

    - by Jeff Foster
    It’s easy to assume, as software engineers, that there is little need to “practice” writing code. After all, we write code all day long! Just by writing a little each day, we’re constantly learning and getting better, right? Unfortunately, that’s just not true. Of course, developers do improve with experience. Each time we encounter a problem we’re more likely to avoid it next time. If we’re in a team that deploys software early and often, we hone and improve the deployment process each time we practice it. However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a “sprint” to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft then it’s our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can’t do well (from Harvard Business Review July/August 2007). One interesting way to introduce deliberate practice, in a sustainable way, is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to do the scoring for 10-pin bowling; one possible end state of it is sketched at the end of this post. It sounds too easy, right? What could we possibly learn from such a simple example? Trust me, though, that it’s not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn. Another good example is Conway’s Game of Life. It’s a simple problem to solve, but try solving it in a functional style. If you’re used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of “tell-don’t-ask”, how will the responsibilities of each object change? As software engineers, we don’t get enough opportunities to explore new ideas. In the middle of a development cycle, we can’t suddenly start experimenting on the team’s code base. Code katas offer an opportunity to explore new techniques in a safe environment. If you’re still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour and I’m willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out. Finally, consider signing up for the Global Day of Code Retreat. It’s like a daylong code kata, it’s on December 8th and there’s probably an event in your area!
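To make the Bowling Game kata mentioned above concrete, here is one possible end state of it (a sketch in Java, close to the solution Bob Martin walks through; your own kata runs will, and should, look different each time):
public class Game {
    private final int[] rolls = new int[21];   // at most 21 rolls in a game
    private int currentRoll = 0;

    public void roll(int pins) {
        rolls[currentRoll++] = pins;
    }

    public int score() {
        int score = 0;
        int frameIndex = 0;
        for (int frame = 0; frame < 10; frame++) {
            if (isStrike(frameIndex)) {
                // strike: 10 plus the next two rolls
                score += 10 + rolls[frameIndex + 1] + rolls[frameIndex + 2];
                frameIndex += 1;
            } else if (isSpare(frameIndex)) {
                // spare: 10 plus the next roll
                score += 10 + rolls[frameIndex + 2];
                frameIndex += 2;
            } else {
                score += rolls[frameIndex] + rolls[frameIndex + 1];
                frameIndex += 2;
            }
        }
        return score;
    }

    private boolean isStrike(int frameIndex) {
        return rolls[frameIndex] == 10;
    }

    private boolean isSpare(int frameIndex) {
        return rolls[frameIndex] + rolls[frameIndex + 1] == 10;
    }
}
The value is not in this listing; it is in arriving at it (and at quite different shapes of it) repeatedly, test-first, in small steps.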

    Read the article

  • Planning development when academic research is involved

    - by Another Anonymous User
    Dear fellow programmers, how do you do "software planning" when academic research is involved? And, on a side note, how do you convince your boss that writing software is not like building a house and is more like writing a novel? The gory details are below. I am in charge of a small dev team working in a research lab. We started developing a piece of software with the purpose of going public one day (i.e. selling it and making money off it). The software depends on, amongst other things, at least two independent research lines: that is, there are at least two Ph.D. candidates that will, hopefully, one day come out with a working implementation of what we need. The main software also depends on other, more concrete resources that we as developers can take care of: graphics rendering, soft bodies deformation, etc. My boss asked me to write the specifications, requirements AND a bloody Gantt chart of the entire project. Faced with the fact that I don't have a clue about the research part, and that such research is fundamental for the software, he said "make assumptions." For the clarity of the argument, he is a professor whose Ph.D. students should come up with the research we need. And he comes from a strictly engineering background: plan everything first, write down specifications, and only then write code, which "is the last part."
What I am doing now:
- I broke the product down into features; each 'feature' is, de facto, a separate product;
- Each feature is built on top of the previous one;
- Once a feature (A) has a working prototype, the team can start working on the next feature (B) while QA is being done for A (if money allows, more people can be brought in, etc.);
- Features that depend on research will come last: by then, hopefully, the research part will be completed (when is still a big question);
- Also, I set the team to use Scrum for the development of 'version 1.0', due in a few months. This deadline could be set based on reasonable assumptions: we listed all required features, we counted our availability, and we gave a reasonable estimate.
So my questions, again, are:
- How do I make my boss happy while at the same time getting something out the door?
- How do I write specifications for something we -the developers- have no clue whether it's possible to do or not? (We still haven't decided which libraries to use for some tasks; we'll do so when we need to.)
- How do I get the requirements for that, given that there are as yet no clients nor investors, just lots of interest and promises?
- How do I get peace in the world?
I am sure at least one of my questions will be answered :)
PS: I am writing this anonymously since a potential investor might backfire if this is discovered. Hope you'll understand. However I must say I do not like this mentality of 'hiding the truth': this program will likely benefit many, and not being able to talk openly about this (with my name and my reputation attached) feels like censorship. But alas, I care more about your suggestions now.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
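Extending the dladm example in [6], here is a rough sketch of how a virtual switch and a pair of bandwidth-managed vNICs for inter-VE traffic might be created on Oracle Solaris 11 (the link and vNIC names are arbitrary placeholders):
# Create an etherstub, a purely internal virtual switch
dladm create-etherstub stub0
# Attach vNICs for two VEs to the virtual switch
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 -p maxbw=500M vnic2
# Review the virtual NICs and their underlying links
dladm show-vnic
Traffic between VEs plumbed on these vNICs then flows through the host at system bus speeds rather than over external wires, which is the point made above about virtualized multi-tier services.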

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Axis2 embedded in my web app is not working

    - by Shamik
    Okay, I lost almost the whole day on this. I have a webapp where I would like to add Axis2 and start working. I added the AxisServlet in the web.xml file like this:
<servlet>
  <servlet-name>AxisServlet</servlet-name>
  <display-name>Apache-Axis Servlet</display-name>
  <servlet-class>org.apache.axis2.transport.http.AxisServlet</servlet-class>
  <load-on-startup>2</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>AxisServlet</servlet-name>
  <url-pattern>/services/*</url-pattern>
</servlet-mapping>
I also added a services.xml file like this:
<service name="ReportViewerService">
  <description>
    This is a sample Web Service for illustrating Attachments API of Axis2
  </description>
  <parameter name="ServiceClass">myclass</parameter>
  <operation name="getReport">
    <actionMapping>urn:getReport</actionMapping>
    <messageReceiver class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
  </operation>
</service>
The directory structure is as follows:
WEB-INF
|- conf
|  |- axis2.xml
|- lib
|  |- (all libs)
|- services
|  |- ReportViewerService
|     |- META-INF
|        |- services.xml
|- web.xml
The problem is that after all of this, the service endpoint will not come up; I cannot see the WSDL file at http://localhost:8080/BOReportingServer/services/ReportViewerService?wsdl -- this gives an exception like:
Throwable occurred: javax.servlet.ServletException: File "/axis2-web/listSingleService.jsp" not found
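For reference, the ServiceClass parameter is supposed to point at a plain Java class (fully qualified; "myclass" above is just a placeholder), shaped roughly like this sketch:
package com.example.reporting;   // placeholder package and names

public class ReportViewerService {
    // Exposed as the "getReport" operation through RPCMessageReceiver
    public String getReport(String reportId) {
        return "report contents for " + reportId;
    }
}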

    Read the article

  • jQuery, jQuery UI, and Dual Licensed Plugins (Dual Licensing)

    - by John Hartsock
    OK, I have read many posts regarding dual licensing using the MIT and GPL licenses. But I'm still curious, as the wording seems to be inclusive. Many of the dual licenses state that the software is licensed using "MIT AND GPL". The "AND" is what confuses me. It seems to me that the word "AND" in the terms means you will be licensing the product under both licenses. Most of the posts, here on Stack Overflow, state that you can license the software using one "OR" the other. jQuery specifically states "OR", whereas jQuery UI specifically states "AND". Another instance of the "AND" would be jqGrid. I'm not a lawyer, but it seems to me that a legal interpretation of this would state that use of the software means you're using the software under both licenses. Has anyone who has contacted a lawyer gotten clarification or a definitive answer as to what is true? Can you use dual-licensed software products that state "AND" in the terms of agreement under either license?
EDITED: Guys, here is specifically what I'm talking about. On jquery.org/license you see the following stated:
You may use any jQuery project under the terms of either the MIT License or the GNU General Public License (GPL) Version 2
but in the headers of the jQuery and jQuery UI libraries you see this:
* Dual licensed under the MIT and GPL licenses.
* http://docs.jquery.com/License
The site says MIT or GPL, but the license statement in the software says MIT and GPL.

    Read the article

  • Configuring multiple distinct WCF binding configurations causes an exception to be thrown

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.
WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser
The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this:
<bindings>
  <netTcpBinding>
    <binding>
      <security mode="TransportWithMessageCredential">
        <message clientCredentialType="UserName"/>
      </security>
    </binding>
    <binding name="public">
      <security mode="Transport">
        <message clientCredentialType="Windows"/>
      </security>
    </binding>
  </netTcpBinding>
</bindings>
The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client:
The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).
In the server trace log, I find the following exception:
Protocol Type application/negotiate was sent to a service that does not support that type of upgrade.
Am I looking in the right direction, or is there a better way to solve this?
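For completeness, this is roughly how I expect the two binding configurations to be selected, via the bindingConfiguration attribute on each endpoint (the service and contract names below are placeholders, not my real ones):
<services>
  <service name="MyApp.SecureService">
    <!-- Picks up the unnamed default netTcpBinding: TransportWithMessageCredential / UserName -->
    <endpoint address="" binding="netTcpBinding" contract="MyApp.ISecureService" />
  </service>
  <service name="MyApp.LogonService">
    <!-- Explicitly references the named "public" binding: Transport security with Windows credentials -->
    <endpoint address="" binding="netTcpBinding" bindingConfiguration="public" contract="MyApp.ILogonService" />
  </service>
</services>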

    Read the article

  • SharePoint: Can't Connect to a Configuration Database When Using Configuration Wizard

    - by Denis
    Hello everyone. I wonder if you could help me with the following problem: We have a SharePoint farm consisting of two servers behind an NLB (a load balancer), a database server and an index server (4 servers in total). The issues initially appeared when we were trying to change Search settings via the Shared Services Provider, and an error appeared with a message I can't even remember now. To fix that problem, we decided to restart the Search services on the Index server via Central Administration… and the process got stuck with the status "stopping" for several days. We found a few solutions on the forums, but nothing helped. Then we tried to exclude the Index server from the farm, but yet again, the Search services would not stop. However, after a week we noticed that the services had finally stopped. That was a prologue. So, after the Search services had finally stopped, we decided to return the Index server to the farm via the SharePoint Configuration Wizard. When we click on the button "Search database instances" (I don't know the exact name in English, we have a Russian version), the correct database name and user automatically appear in the relevant fields. Then we supply a password for the user, click "Next" twice, and the configuration begins. Unfortunately, at the second stage we receive the error "Could not connect to a configuration database" with the following exception in the log file: "Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation." Now we can't figure out why the configuration wizard can't connect to the configuration database. Any suggestions will be really appreciated.

    Read the article

  • Silverlight 4 - MVC 2 ASP.NET Membership integration "single sign on"

    - by Scrappydog
    Scenario: I have an ASP.NET MVC 2 site using ASP.NET Forms Authentication. The site includes a Silverlight 4 application that needs to securely call internal web services. The web services also need to be publicly exposed for third-party authenticated access. Challenges: Securely accessing web services from Silverlight using the current user's identity without requiring the user to log in again in the Silverlight application. Providing a secure way for third-party applications to access the same web services with the same user's credentials, ideally without using ASP.NET Forms Authentication. Additional details and limitations: This application is hosted in Azure. We would rather NOT use RIA Services if at all possible. Solutions Under Consideration: I think that if the web services are part of the same MVC site that hosts the Silverlight application, then forms authentication should probably "just work" from Silverlight based on the user's forms auth cookies. But this seems to rule out the possibility of hosting the web services separately (which is desirable in our scenario). For third-party access to the web services, I'm guessing that separate endpoints with a different authentication solution are probably the right answer, but I would rather support only one version of the services if possible... Questions: Can anybody point me towards any sample applications that implement something like this? How would you recommend implementing this solution?
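
    As a rough illustration of the "just work" path only: when the Silverlight client calls back into the same application that issued the forms-auth cookie, the browser sends the cookie along and the service side can read the ticket as sketched below. This is an assumption-laden sketch, not a full answer: it says nothing about separately hosted services (which would at least need a shared machineKey to decrypt the same ticket) or about the third-party endpoint.

        using System.Web;
        using System.Web.Security;

        public static class CallerIdentity
        {
            // Returns the forms-authenticated user name for the current request,
            // or null when the caller is anonymous or the ticket has expired.
            public static string GetAuthenticatedUserName()
            {
                HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
                if (cookie == null)
                    return null;

                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                return (ticket == null || ticket.Expired) ? null : ticket.Name;
            }
        }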

    Read the article

  • Trouble determining proper decoding of a REST response from an ArcGIS REST service using IHttpModule

    - by Ryan Taylor
    First a little background on what I am trying to achieve. I have an application that is utilizing REST services served by ArcGIS Server and IIS7. The REST services return data in one of several different formats. I am requesting a JSON response. I want to be able to modify the response (remove or add parameters) before the response is sent to the client. However, I am having difficulty converting the stream to a string that I can modify. To that end, I have implemented the following code in order to try to inspect the stream. SecureModule.cs using System; using System.Web; namespace SecureModuleTest { public class SecureModule : IHttpModule { public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(OnBeginRequest); } public void Dispose() { } public void OnBeginRequest(object sender, EventArgs e) { HttpApplication application = (HttpApplication) sender; HttpContext context = application.Context; HttpRequest request = context.Request; HttpResponse response = context.Response; response.Filter = new ServicesFilter(response.Filter); } } } ServicesFilter.cs using System; using System.IO; using System.Text; namespace SecureModuleTest { class ServicesFilter : MemoryStream { private readonly Stream _outputStream; private StringBuilder _content; public ServicesFilter(Stream output) { _outputStream = output; _content = new StringBuilder(); } public override void Write(byte[] buffer, int offset, int count) { _content.Append(Encoding.UTF8.GetString(buffer, offset, count)); using (TextWriter textWriter = new StreamWriter(@"C:\temp\content.txt", true)) { textWriter.WriteLine(String.Format("Buffer: {0}", _content.ToString())); textWriter.WriteLine(String.Format("Length: {0}", buffer.Length)); textWriter.WriteLine(String.Format("Offset: {0}", offset)); textWriter.WriteLine(String.Format("Count: {0}", count)); textWriter.WriteLine(""); textWriter.Close(); } // Modify response _outputStream.Write(buffer, offset, count); } } } The module is installed in the /ArcGIS/rest/ virtual directory and is executed via the following GET request. http://localhost/ArcGIS/rest/services/?f=json&pretty=true The web page displays the expected response, however, the text file tells a very different (encoded?) story. Expect Response {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } Text File Contents Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q? Length: 4096 Offset: 0 Count: 168 Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???!P Length: 4096 Offset: 0 Count: 11 Interestingly, Fiddler depicts a similar picture. Fiddler Request GET http://localhost/ArcGIS/rest/services/?f=json&pretty=true HTTP/1.1 Host: localhost Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.70 Safari/533.4 Referer: http://localhost/ArcGIS/rest/services Cache-Control: no-cache Pragma: no-cache Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: a=mWz_JFOusuGPnS3w5xx1BSUuyKGB3YZo92Dy2SUntP2MFWa8MaVq6a4I_IYBLKuefXDZANQMeqvxdGBgQoqTKz__V5EQLHwxmKlUNsaK7do. 
Fiddler Response - Before Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 Content-Encoding: gzip ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 179 ????????`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???! P??? Fiddler Response - After Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 80 {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } I think that the problem may be a result of compression and/or chunking of data (this might be why I am receiving two calls to ServicesFilter.Write(...), however, I have not yet been able to solve the issue. How might I decode, unzip, and otherwise convert the byte stream into the string I know it should be for modification by my filter?
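
    Assuming the garbling really is gzip (the Content-Encoding: gzip header suggests exactly that), one approach is to buffer every chunk the handler writes and only decompress once the complete response has arrived, for example in Close(). The sketch below is a rough, hedged variant of the ServicesFilter idea along those lines; it deliberately glosses over header handling (the Content-Encoding and Content-Length headers would have to be adjusted to match the rewritten body), and an alternative worth considering is stripping Accept-Encoding from the incoming request so the handler never compresses at all.

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class DecompressingFilter : MemoryStream
        {
            private readonly Stream _outputStream;
            private readonly MemoryStream _buffer = new MemoryStream();

            public DecompressingFilter(Stream output)
            {
                _outputStream = output;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                // The handler may call Write several times (hence the two observed calls),
                // so just accumulate the compressed bytes here.
                _buffer.Write(buffer, offset, count);
            }

            public override void Close()
            {
                // Decompress the complete gzip payload and decode it as UTF-8.
                string json;
                using (var compressed = new MemoryStream(_buffer.ToArray()))
                using (var gzip = new GZipStream(compressed, CompressionMode.Decompress))
                using (var reader = new StreamReader(gzip, Encoding.UTF8))
                {
                    json = reader.ReadToEnd();
                }

                // ... modify the JSON string here before sending it on ...

                byte[] plain = Encoding.UTF8.GetBytes(json);
                _outputStream.Write(plain, 0, plain.Length);
                base.Close();
            }
        }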

    Read the article

  • rsRenderingExtensionNotFound error using SSRS 2005

    - by Fermin
    Hi, I am trying to use the Sharp-shooter Silverlight report viewer control. This control has a server-side component that renders the reports as XAML for display in the client-side control. I have tried manually installing the rendering extension, according to the instructions on their website and also the instructions on MSDN. I have basically copied the DLLs into the Reporting Services bin folder and added the following line to the rsreportserver.config file: <Extension Name="XAML" Type="PerpetuumSoft.ReportingService.XamlRendering.XamlRenderer,PerpetuumSoft.ReportingService.XamlRendering"/> I've also added a line to the rssrvpolicy.config file as specified on their site. After I made these changes, I went into Windows services and restarted the Reporting Services service, but I get the following error when I try to run reports: rsRenderingExtensionNotFound: An attempt has been made to use a rendering extension that is not registered for this report server. Is there anything else that I need to do to use a custom rendering extension within SSRS 2005?

    Read the article

  • Navigating through a sea of hype

    - by wouldLikeACrystalBall
    This is a vague, open question, so if you have no interest in these, please leave now. A few years ago it seemed everyone thought the death of desktop software was imminent. Web applications were the future. Everyone would move to cloud-based software-as-a-service systems, and developing applications for specific end-user platforms like Windows would soon become something of a ghetto. Joel's "How Microsoft Lost the API War" was but one of many such pieces sounding the death knell for this way of software development. Flash-forward to 2010, and the hype is all around mobile devices, particularly the iPhone. Software-as-a-Service vendors--even small ones such as YCombinator startups--go out of their way to build custom applications for the iPhone and other smart phone devices; applications that can be quite sophisticated, that run only on specific hardware and software architectures and are thus inherently incompatible. Now some of you are probably thinking, "Well, only the decline of desktop software was predicted; mobile devices aren't desktops." But the term was used by those predicting its demise to mean laptops also, and really any platform capable of running a browser. What was promised was a world where HTML and related standards would supplant native applications and their inherent difficulties. We would all code to the browser, not the OS. But here we are in 2010 with the AppStore bulging and development for the iPad just revving up. A few days ago, I saw someone on Hacker News claim that the future of computing was entirely in small, portable devices. Apparently the future is underpowered, requires dexterous thumbs and induces near-sightedness. How do those who so vehemently asserted one thing now assert the opposite with equal vehemence, without making even the slightest admission of error? And further, how are we as developers supposed to sift through all of this? I bought into the whole web-standards utopianism that was in vogue back in '06-'07 and now feel like it was a mistake. Is there some formula one can apply rather than a mere appeal to experience?

    Read the article

  • Make a service monitor and restart process

    - by Andrej
    Hi, I'm making an app that needs to be up and running at all times (24/7). I'm not very experienced with services, but I read on the internet that services can be made unclosable by setting their "onclose" property to false. I have the service monitoring my app, and the service can't be closed directly from the Task Manager services window... but when I click "Go to process", Task Manager leads me to the process the service spawned. From there I can close the process and instantly close the service. Since I don't have much experience with services, I'm wondering: is this behavior normal? If not, how do I make the service unstoppable?
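
    For context on the sketch below: a watchdog pattern is one common way to keep an application running, where a service periodically checks for the process and restarts it when it is gone. ServiceBase.CanStop = false is the closest real equivalent of an "onclose" property, but it only refuses the Service Control Manager's stop request; nothing prevents an administrator from killing the spawned process. The service name, process name and executable path here are placeholders, not taken from the question.

        using System.Diagnostics;
        using System.ServiceProcess;
        using System.Timers;

        public class WatchdogService : ServiceBase
        {
            private Timer _timer;

            public WatchdogService()
            {
                ServiceName = "MyAppWatchdog"; // placeholder name
                CanStop = false;               // refuse SCM stop requests (still not truly "unstoppable")
            }

            protected override void OnStart(string[] args)
            {
                _timer = new Timer(5000); // check every five seconds
                _timer.Elapsed += CheckProcess;
                _timer.Start();
            }

            protected override void OnStop()
            {
                _timer.Stop();
                _timer.Dispose();
            }

            private void CheckProcess(object sender, ElapsedEventArgs e)
            {
                // Restart the monitored application if it is no longer running.
                if (Process.GetProcessesByName("MyApp").Length == 0)
                {
                    Process.Start(@"C:\Program Files\MyApp\MyApp.exe");
                }
            }

            public static void Main()
            {
                ServiceBase.Run(new WatchdogService());
            }
        }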

    Read the article

  • NSURLErrorBadURL error

    - by Victor jiang
    My iPhone app calls Google Local Search (non-JavaScript version) to perform a search. Below is my code to form a URL: NSString *url = [NSString stringWithFormat:@"http://ajax.googleapis.com/ajax/services/search/local?v=1.0&q=%@", keyword]; NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:url]]; [request setHTTPMethod:@"GET"]; //get response NSHTTPURLResponse* urlResponse = nil; NSError *error = [[[NSError alloc] init] autorelease]; NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&urlResponse error:&error]; NSString *result = [[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding]; When the keyword consists of English characters, it works fine, but when it refers to Chinese characters (encoded in UTF-8, such as '???' whose UTF-8 code is 'e5a4a9 e5ae89 e997a8'), it reports an NSURLErrorBadURL error (-1000, returned when a URL is sufficiently malformed that a URL request cannot be initiated). Why? To investigate further, I used Safari and typed in the URL below: http://ajax.googleapis.com/ajax/services/search/local?v=1.0&q=??? It also works, and the output I got from Macsniffer is: /ajax/services/search/local?v=1.0&q=%E5%A4%A9%E5%AE%89%E9%97%A8 So I wrote a test URL directly in my app: NSString *url = [NSString stringWithFormat:@"http://ajax.googleapis.com/ajax/services/search/local?v=1.0&q=%E5%A4%A9%E5%AE%89%E9%97%A8"]; And what I got from Macsniffer is something else: /ajax/services/search/local?v=1.0&q=1.687891E-28750X1.417C0001416CP-102640X1.4CC2D04648FBP-9999-1.989891E+0050X1.20DC00184CC67P-953E8E99A8 It seems my keyword "%E5%A4%A9%E5%AE%89%E9%97%A8" was translated into something else. So how can I form a valid URL? I do need help!

    Read the article

  • Using Rails ActiveRecord relationships

    - by Brian Goff
    I'm a newbie programmer; I've been doing shell scripting for years but have recently taken up OOP programming using Ruby and am creating a Rails application. I'm having a hard time getting my head wrapped around how to use my defined model relationships. I've tried searching Google, but all I can come up with are basically cheat sheets for what has_many, belongs_to, etc. all mean. This stuff is easy to define and understand, especially since I've done a lot of work directly with SQL. What I don't understand is how to actually use those defined relationships. In my case I have 3 models: Locations, Hosts, Services. Relationships (not actual code, just for shortening it): Services belongs_to :hosts; Hosts has_many :services, belongs_to :locations; Locations has_many :hosts. In this case I want to be able to display a column from Locations while working with Services. In SQL this is a simple join, but I want to do it the Rails/Ruby way, and also not use SQL in my code or redefine my joins.

    Read the article

  • Data view web part throwing error

    - by Ashutosh Singh
    Hi, I'm using an XSLT-based data view web part. The steps I have taken to create the data view web part are: 1. added a list view web part to the page; 2. modified the toolbar property to show the full toolbar; 3. opened the web page containing the above list view web part in SharePoint Designer and converted it to an XSLT-based web part (to make further changes to the UI); 4. saved the page and previewed it in the browser. In the browser the web part was throwing the error, while I was able to see it properly in Designer without any error. The error message shown in the web part was: ** Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Windows SharePoint Services-compatible HTML editor such as Microsoft Office SharePoint Designer. If the problem persists, contact your Web server administrator. ** The error message provided in the SharePoint log file was:
    05/12/2010 17:56:29.54 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()
    05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88wy Medium SPDataSourceView.ExecuteSelect() - selectArguments: IsEmpty=True, MaximumRows=0, RetrieveTotalRowCount=False, SortExpression=, StartRowIndex=0, TotalRowCount=-1
    05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88x2 Medium SPDataSourceView.ExecuteSelect() - formattedQuery = 1
    05/12/2010 17:56:29.64 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    Read the article

< Previous Page | 398 399 400 401 402 403 404 405 406 407 408 409  | Next Page >