Search Results

Search found 699 results on 28 pages for 'n tier'.

Page 9/28 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Would anybody mind taking a look at my XML structure?

    - by Bill H
    I am relatively new to this, but I was hoping somebody could offer up a good critique of this XML structure I put together. I am not looking for anything in depth, but if somebody notices anything inherently wrong with the structure (or has any tips to make it better) I'd greatly appreciate it. We have a large number of products that we wholesale out, and our customers were looking for a data feed to incorporate our products into their websites.
        <product modified="">
          <id></id>
          <title></title>
          <description></description>
          <upc></upc>
          <quantity></quantity>
          <images>
            <image width="" height=""></image>
            <image width="" height=""></image>
            <image width="" height=""></image>
          </images>
          <category>
            <name></name>
            <subcategory></subcategory>
          </category>
          <sale expiration="">yes</sale>
          <msrp></msrp>
          <cube></cube>
          <weight></weight>
          <pricing>
            <tier>
              <pack></pack>
              <price></price>
            </tier>
            <tier>
              <pack></pack>
              <price></price>
            </tier>
            <tier>
              <pack></pack>
              <price></price>
            </tier>
          </pricing>
        </product>
    We sell in 3 different pack sizes, hence the pricing node.

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Thanos Terentes Printzios
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services and provide availability guarantees. The latest release, Oracle Coherence 12c, comes with great improvements in ease of use, integration and RASP (Reliability, Availability, Scalability, and Performance). In addition, it features an innovative approach to building and deploying a Coherence application as an integral part of a typical Java EE enterprise application. Coherence GAR archives and Coherence managed servers are now first-class citizens of JEE applications and Oracle WebLogic domains, respectively. That enables even easier development, deployment and management of complex multi-tier enterprise applications powered by rich data grid features. Oracle Coherence 12c makes your solution ready for the future of big data and an always-online world. This webcast is focused on demonstrating: How to create a Coherence application using Oracle Enterprise Pack for Eclipse 12.1.2.1.1 (Kepler release). How to package the application in the form of a GAR archive inside an EAR deployable application. How to deploy the application to multi-tier WebLogic clusters. How to define and configure the WebLogic domain for the tiered clusters hosting both the data grid and client JEE applications. Finally, we will expose the data in the grid to external systems using REST services and create a simple web interface to the underlying data using Oracle ADF Faces components. Join us on this technology webcast to find out more about how Oracle Cloud Application Frameworks brings together the key industry-leading technologies of Oracle Coherence and WebLogic 12c, delivering next-generation applications. Agenda: Introduction to Oracle Coherence What's new in the 12c release POF annotations Live Events Elastic Data (Flash storage support) Managed Coherence Servers for Oracle WebLogic Coherence Applications (Grid Archive) Live Demonstration Creating and configuring Coherence servers forming the data tier cluster Creating a simple Coherence grid application in Eclipse Adding REST support and creating a simple ADF Faces client application Deploying the grid and client applications to separate tiers in the WebLogic topology HA capabilities of the data tier Summary - Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour REGISTER NOW For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • Object oriented n-tier design. Am I abstracting too much? Or not enough?

    - by max
    Hi guys, I'm building my first enterprise grade solution (at least I'm attempting to make it enterprise grade). I'm trying to follow best practice design patterns but am starting to worry that I might be going too far with abstraction. I'm trying to build my asp.net webforms (in C#) app as an n-tier application. I've created a Data Access Layer using an XSD strongly-typed dataset that interfaces with a SQL server backend. I access the DAL through some Business Layer Objects that I've created on a 1:1 basis to the datatables in the dataset (eg, a UsersBLL class for the Users datatable in the dataset). I'm doing checks inside the BLL to make sure that data passed to DAL is following the business rules of the application. That's all well and good. Where I'm getting stuck though is the point at which I connect the BLL to the presentation layer. For example, my UsersBLL class deals mostly with whole datatables, as it's interfacing with the DAL. Should I now create a separate "User" (Singular) class that maps out the properties of a single user, rather than multiple users? This way I don't have to do any searching through datatables in the presentation layer, as I could use the properties created in the User class. Or would it be better to somehow try to handle this inside the UsersBLL? Sorry if this sounds a little complicated... Below is the code from the UsersBLL: using System; using System.Data; using PedChallenge.DAL.PedDataSetTableAdapters; [System.ComponentModel.DataObject] public class UsersBLL { private UsersTableAdapter _UsersAdapter = null; protected UsersTableAdapter Adapter { get { if (_UsersAdapter == null) _UsersAdapter = new UsersTableAdapter(); return _UsersAdapter; } } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, true)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsers() { return Adapter.GetUsers(); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUserByUserID(int userID) { return Adapter.GetUserByUserID(userID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByTeamID(int teamID) { return Adapter.GetUsersByTeamID(teamID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByEmail(string Email) { return Adapter.GetUserByEmail(Email); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Insert, true)] public bool AddUser(int? 
teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { // Create a new UsersRow instance PedChallenge.DAL.PedDataSet.UsersDataTable Users = new PedChallenge.DAL.PedDataSet.UsersDataTable(); PedChallenge.DAL.PedDataSet.UsersRow user = Users.NewUsersRow(); if (UserExists(Users, Email) == true) return false; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Add the new user Users.AddUsersRow(user); int rowsAffected = Adapter.Update(Users); // Return true if precisely one row was inserted, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Update, true)] public bool UpdateUser(int userID, int? teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { PedChallenge.DAL.PedDataSet.UsersDataTable Users = Adapter.GetUserByUserID(userID); if (Users.Count == 0) // no matching record found, return false return false; PedChallenge.DAL.PedDataSet.UsersRow user = Users[0]; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Update the product record int rowsAffected = Adapter.Update(user); // Return true if precisely one row was updated, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Delete, true)] public bool DeleteUser(int userID) { int rowsAffected = Adapter.Delete(userID); // Return true if precisely one row was deleted, // otherwise false return rowsAffected == 1; } private bool UserExists(PedChallenge.DAL.PedDataSet.UsersDataTable users, string email) { // Check if user email already exists foreach (PedChallenge.DAL.PedDataSet.UsersRow userRow in users) { if (userRow.Email == email) return true; } return false; } } Some guidance in the right direction would be greatly appreciated!! Thanks all! Max
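    As an aside on the "singular User class" idea raised above, here is a minimal sketch of what such a domain object might look like. This is not part of the original post: the class, its properties and the FromRow helper are hypothetical, the property names simply mirror the columns used in UsersBLL, and it assumes the typed dataset generates UserID and IsTeamIDNull() members (which it normally does).
        // Hypothetical domain object wrapping a single user, so the presentation
        // layer works with properties instead of digging through a UsersDataTable.
        public class User
        {
            public int UserID { get; set; }
            public int? TeamID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Email { get; set; }
            public string Role { get; set; }
            public int LocationID { get; set; }

            // Maps one strongly-typed row returned by the DAL onto the domain object.
            public static User FromRow(PedChallenge.DAL.PedDataSet.UsersRow row)
            {
                return new User
                {
                    UserID = row.UserID,
                    TeamID = row.IsTeamIDNull() ? (int?)null : row.TeamID,
                    FirstName = row.FirstName,
                    LastName = row.LastName,
                    Email = row.Email,
                    Role = row.Role,
                    LocationID = row.LocationID
                };
            }
        }
    With something like this in place, a UsersBLL method could return a User (or a List<User>) instead of a whole datatable, e.g. by wrapping the single row returned by GetUserByUserID with User.FromRow, so the presentation layer never touches the dataset types.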

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far: Install TFS2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier); SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier); Reporting Services is installed on the app tier. After this installation worked fine, we installed SharePoint 2010 on the app tier. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred: TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. We have also noticed that the Document folder in the Team project also has a red X. Please help. Thanks upfront.

    Read the article

  • Hello With Oracle Identity Manager Architecture

    - by mustafakaya
    Hi, my name is Mustafa! I'm a Senior Consultant in the Fusion Middleware team, living in Istanbul, Turkey. I have worked on many different Java-based software development projects such as end-to-end web applications, CRM, telco VAS and integration projects. I want to share my experiences and research about Fusion Middleware products in this column. The customer always wants the best solution from software consultants or developers. The solution may be a code snippet or a complete change of architecture; we face different requests depending on the customer's situation. In my posts I want to discuss Fusion Middleware product architectures, how to extend their usability with APIs or UI customization, and more, and I look forward to engaging with you on your experiences and thoughts on this.  In this first post I will be discussing the Oracle Identity Manager architecture, and I plan to discuss Oracle Identity Manager 11g features in later posts. Oracle Identity Manager System Architecture Oracle Identity Governance includes Oracle Identity Manager, Oracle Identity Analytics and Oracle Privileged Account Manager. I will discuss the Oracle Identity Manager architecture in this post.  Basically, Oracle Identity Manager is a standard n-tier Java EE application that is deployed on Oracle WebLogic Server and uses a database.  The Oracle Identity Manager presentation tier has three different screens and two different clients. Identity Self Service and Identity System Administration are web-based thin clients. The Design Console is a Java Swing client that communicates directly with the business service tier.  Identity Self Service provides end-user operations and delegated administration features. System Administration provides system administration functions. The Design Console is mostly used for development and management operations such as creating and managing adapters and process forms, notifications, workflow designs, reconciliation rules, etc. The business service tier is implemented as an Enterprise JavaBeans (EJB) application, so you can extend Oracle Identity Manager's capabilities.  -The SPML and EJB APIs allow you to develop custom plug-ins, such as for managing roles or identities.  -Identity services expose the core business capabilities of Oracle Identity Manager, such as the user provisioning or reconciliation service. -Integration services allow you to develop custom connectors or adapters for various deployment needs. -Platform services let you use the Entitlement Server, the Scheduler or SOA composites. The middleware tier gives you the capabilities of ADF Faces, SOA Suite, the Scheduler, Entitlement Server and BI Publisher reports. So OIM allows you to configure workflows using Oracle SOA Suite or define authorization policies with Oracle Entitlement Server. You can also customize the OIM UI without needing to write code, and using the ADF Business Editor you can add custom attributes to user, role, catalog and other objects. Data tier: Oracle Identity Manager is driven by data and metadata, which provide flexibility and adaptability to Oracle Identity Manager's functionality.  -The database has five schemas: OIM, SOA, MDS, OPSS and OES. Oracle Identity Manager uses the database to store runtime and configuration data, and all entity, transactional and audit data is stored in the database. 
-Metadata store: customizations and personalizations are stored in a file-based repository or a database-based repository. In the Oracle Identity Manager architecture, the metadata is kept in the Oracle Identity Manager database to take advantage of some of the advanced performance and availability features that this mode provides. -Identity store: Oracle Identity Manager provides the ability to integrate an LDAP-based identity store into the Oracle Identity Manager architecture.  Oracle Identity Manager uses the human workflow module of Oracle SOA Suite. OIM connects to SOA using the T3 URL, which is the front-end URL for the SOA server. Oracle Identity Manager uses the embedded Oracle Entitlement Server for authorization checks in the OIM engine.  Several Oracle Identity Manager modules use JMS queues. Each queue is processed by a separate message-driven bean (MDB), which is also part of the Oracle Identity Manager application; message producers are part of the Oracle Identity Manager application as well. Oracle Identity Manager uses scheduled jobs for some activities in the background. Some scheduled jobs come out of the box, such as disabling users after their end date, or you can define your own custom scheduled jobs with the Oracle Identity Manager APIs. You can use Oracle BI Publisher to report on Oracle Identity Manager transaction or audit data, which is stored in the database. About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, always eager to learn new technologies and develop new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager  I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing and what does that have to do with ERP? Well, if you grew up in the USA during the 50’s, 60’s and maybe a bit in the early 70’s, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled hero aspect of Tony Stark and, hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth; “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print, “Caricatures shown not intended to depict Artemia.” OK, what ten-year-old knows what the heck artemia is? Well, you grow up fast once you’ve been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff. So here is the latest one – “Oracle’s JD Edwards is NOT tier one”. Really? Definition please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world’s largest business software and hardware corporation? Check and check: JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don’t mind, I am going to mark national and international recognition in the got-it column. So what else is there? Well, let me offer a few criteria. Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here. Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example. Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new application modules, tools releases, integrations…. 
Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined, independent tools layer that helps enable you to scale your business without an ERP reimplementation. A continuation plan – Oracle’s JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle’s next generation of applications. Oh, I almost forgot that the expert sources agree on one additional thing: a tier one vendor may be a preferred vendor that offers products and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is, the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember: the picture on the box might not be the actual size. 

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures: A persistent data structure that is described using DDL and implemented as RDBMS tables and columns. A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java. A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language. UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures. But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data-tracking spreadsheet document that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent: database table and column data structures, domain object data structures, service layer interface methods and parameter data structures, and screen & UI component data structures?

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
    If you don’t follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff) along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX) which you can read about in Adam Mahood’s blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs, and according to Adam’s post it incorporates a new feature that I think is very interesting indeed: Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in the package replaces the existing data in the target database. In short, .dacpacs can now include data as well as schema. I’m very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema it does not provide anything similar for data – if you want to deploy data from your SSDT projects then you have to write Post-Deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet, however it is a very important prerequisite, so I am hoping that a feature to provide declaration of data (in addition to declaration of schema which we have today) is going to light up in SSDT in the not too distant future. Read more about the latest SSDT, Power Tools & DacFX releases at: Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood @Jamiet
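    As a rough illustration of what the new capability enables (my own sketch, not taken from the post), extracting a .dacpac that includes table data and publishing it elsewhere would look something like the commands below. The server and database names are made up, and the ExtractAllTableData property name should be checked against the SqlPackage.exe documentation that ships with this DacFX release:
        REM Extract schema plus user-table data into a .dacpac
        SqlPackage.exe /Action:Extract /SourceServerName:MyServer /SourceDatabaseName:AdventureWorks /TargetFile:AdventureWorks.dacpac /p:ExtractAllTableData=true

        REM Publish it to a target; data in the package replaces the data in the target tables
        SqlPackage.exe /Action:Publish /SourceFile:AdventureWorks.dacpac /TargetServerName:OtherServer /TargetDatabaseName:AdventureWorks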

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff and additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, with business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1) but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts: Firstly, I have concerns with moving from the relative safety of an SSIS job that protects me from downtime and the speed of the ERP system to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? It is my understanding that data access objects (the components that connect directly with the web services and convert them into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade); am I correct? As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this) which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application, do I set up a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits), do I set the "connection" up per page request, or is there a design pattern I am missing that can manage multiple "connections" where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".

    Read the article

  • Azure Search Preview

    - by Greg Low
    One of the things I’ve been keeping an eye on for quite a while now is the development of the Azure Search system. While it’s not a full replacement for the full-text indexing service in SQL Server on-premises as yet, it’s a really, really good start. Liam Cavanagh, Pablo Castro and the team have done a great job bringing this to the preview stage and I suspect it could be quite popular. I was very impressed by how they incorporated quite a bit of feedback I gave them early on, and I’m sure that others involved would have felt the same. There are two tiers at present. One is a free tier and has shared resources; the other is currently $125/month and has reserved resources. I would like to see another tier between these two, much the same way that Azure websites work. If you have any feedback on this, now would be a good time to make it known. In the meantime, given there is a free tier, there’s no excuse to not get out and try it. You’ll find details of it here: http://azure.microsoft.com/en-us/documentation/services/search/ I’ll be posting more info about this service, and showing examples of it during the upcoming months.

    Read the article

  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder where you store the TFS cache. It can take up “some” amount of room, so moving it to another drive can be beneficial. This is the source control cache that TFS uses to cache data from the database. Moving the cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a performance improvement (although small) by putting it on another drive. Create a new directory to store the cache, e.g. “d:\TfsCache\” Figure: Create a new folder Give the local TFS WPG group full control of the directory   Figure: You need to use the App Tier Service WPG In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting (to the appSettings section). Figure: The web.config for TFS is stored in the application folder <appSettings> ... <add value="D:\" key="dataDirectory" /> ... </appSettings> Figure: Adding this to the web.config will trigger a restart of the app pool Figure: Your web.config should look something like this The app pool will automatically recycle and Team Web Access will start using the new location.  If you then download a file (not via a proxy) a folder with a GUID should be created immediately in the folder from step 1.  If the folder doesn’t appear, then you probably don’t have permissions set up properly.

    Read the article

  • Assuming "clean code/architecture" is there a difference in "effort" between PHP or Java/J2EE web application development?

    - by PhD
    A client asked us to estimate effort when selecting PHP as the implementation language for his next web-based application. We spent about a week exploring PHP, prototyping, testing, etc. We are quite new to this language - we may have hacked around in it in the past but, let's go with PHP noobs who are application development experts (for lack of a better, less flattering word :) It seems that if we write clean, maintainable code and follow separation of concerns and enterprise architecture patterns (DAOs etc.), the 'effort' in creating an object-oriented PHP-based web application seems to be about the same as for a Java-based one. Here's our equation for estimating the effort (development/delivery time): ConstructionEffort = f(analysis, design, coding, testing, review, deployment) We were specifically comparing effort estimates in creating an enterprise application with the following: PHP + CakePHP/CodeIgniter (should we have considered others?) Java + Spring + Restlet It's an end-to-end application: Client: Javascript/jQuery + HTML/CSS Middle tier/Business Logic - (still evaluating PHP/Java) Database: MySQL The effort estimates of the 1st and 3rd tiers are constant and relatively independent of the middle tier's technology. At a high level, with an initial breakdown into user stories of the requested features as well as a high-level SWAG on the sheer number of classes/SLOC that would be required, PHP doesn't seem to differ by much from what is required of the same in Java. Is this correct? We are basing our initial estimates on the initial prototyping/coding we've done with PHP - we are currently disregarding fluency with the language as a factor, since that'll be an initial hurdle and not a long-term impediment IMHO (we also have sufficient time to become quite fluent with PHP). I'm interested in knowing the programmers' perspective with respect to effort when creating similar applications with either of the languages, to justify choosing one over the other. Are we missing something here? It seems we are going against the popular belief of PHP being quicker to market (or, being very fluent with Java, we have our vision clouded). It doesn't seem to offer any coding/programming effort saving from what we've played around with.

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade. When this environment was set up, based on support and limitations for TFS 2008, we used a 32-bit platform for the TFS application tier and build servers.  The data tier, since we were installing SP1 for TFS 2008, was done as a 64-bit installation.  We knew at that point that TFS 2010 was in the picture, so that served as further motivation to make that a 64-bit install of SQL Server.  The SQL Server at that point was a single-instance (default) installation too.  We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve. I am sure many of you who have gone through a TFS 2008 installation have encountered challenges and trials.  And likely even more so if you, like me, needed to configure your deployment for SSL.  So, frankly, I was a little concerned about the process of migrating.  They say practice makes perfect, and this environment I worked on is in some way my brainchild, so I was not ready nor willing for this to be a failure or something that would impact my client’s work. Prior to going through the migration process, we did the install of the new environment.  The data tier was the same, with a new named instance in place to host the 2010 install.  The application tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be. Anyway, on to the tasks for the migration (thanks to Martin Hinselwood for his very thorough documentation): Close access to TFS 2008; you want to make sure all code is checked in and ready to go.  We allowed 8 hours between code lock and the start of the migration to give time for any unexpected delay.  How do we close access?  Stop IIS. Backup your databases.  Which ones? TfsActivityLogging TfsBuild TfsIntegration TfsVersionControl TfsWorkItemTracking TfsWorkItemTrackingAttachments Restore the databases to the new named instance (make sure you keep the same names) Now comes the fun part! The actual import/migration of the databases.  A couple of things happen here. The TfsIntegration database will be scanned, and the other databases will be checked to validate that they exist.  Those databases will go through a process in which data is extracted and transferred to the TfsVersionControl database, which is then renamed to Tfs_<Collection>. You will be using a tool called tfsconfig and the option import. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools); the command to use is as follows:    tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed Where <instance> is the SQL Server instance where you restored the databases to, and <name> is the name you will give the collection. And to explain /confirmed: this means you have done a backup of the databases. Why? Well, remember you are going to merge the databases you restored when you execute the tfsconfig import command. The process will go through about 200 tasks; once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and contents. 
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.
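    By way of a worked example (the instance and collection names here are invented purely for illustration), if the databases were restored to a named instance called DataTier\TFS2010 and the collection is to be called DefaultCollection, the call would be:
        tfsconfig import /sqlinstance:DataTier\TFS2010 /collectionName:DefaultCollection /confirmed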

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3,000 tenants, and all the finances. When I make changes, because it's such a large code base, I sometimes break something somewhere else. I typically test it by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing. However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions. What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
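    One common first step for a situation like this (not from the question itself; the language and test framework are assumed to be C# and NUnit, and every name below is made up) is to pull one business rule at a time out of the UI event handlers and put it behind an interface, so the database can be replaced by a fake in tests:
        using System;
        using NUnit.Framework;

        // Both the UI and the real database-backed implementation depend on this interface.
        public interface IRentalRepository
        {
            decimal GetOutstandingBalance(int tenantId);
        }

        // A business rule moved out of a form's event handler into a plain, testable class.
        public class LateFeeCalculator
        {
            private readonly IRentalRepository _repository;

            public LateFeeCalculator(IRentalRepository repository)
            {
                _repository = repository;
            }

            // Example rule: a 5% late fee on any outstanding balance.
            public decimal CalculateLateFee(int tenantId)
            {
                decimal balance = _repository.GetOutstandingBalance(tenantId);
                return balance > 0 ? Math.Round(balance * 0.05m, 2) : 0m;
            }
        }

        [TestFixture]
        public class LateFeeCalculatorTests
        {
            // A fake repository: no database needed to exercise the rule.
            private class FakeRepository : IRentalRepository
            {
                public decimal Balance;
                public decimal GetOutstandingBalance(int tenantId) { return Balance; }
            }

            [Test]
            public void ChargesFivePercentOnOutstandingBalance()
            {
                var calculator = new LateFeeCalculator(new FakeRepository { Balance = 200m });
                Assert.AreEqual(10m, calculator.CalculateLateFee(42));
            }

            [Test]
            public void NoFeeWhenNothingIsOwed()
            {
                var calculator = new LateFeeCalculator(new FakeRepository { Balance = 0m });
                Assert.AreEqual(0m, calculator.CalculateLateFee(42));
            }
        }
    This does not solve the database-testing part of the question, but it gives an incremental path: each time a piece of UI logic is touched, the rule moves behind such an interface and gains a test, rather than the whole application being rewritten up front.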

    Read the article

  • Custom Solr sorting

    - by Tom
    Hello everyone, I've been asked to do an evaluation of Solr as an alternative to a commercial search engine. The application now has a very particular way of sorting results using something called "buckets". I'll try to explain in a bit of detail: In the interface they have 2 fields: "what" and "where". Both fields are actually sets of fields (what = category, name, contact info... and where = country, state, region, city...) so the copyField feature of Solr immediately comes to mind. Now, based on which field generated the actual match, the result should end up in a specific bucket. In particular, the first bucket contains all the result documents that have an exact match on the category field, the second bucket all exact matches on name, the third partial matches on category, the fourth partial matches on name, the fifth matches on contact info, etc... Then within each of those first-tier buckets all results are placed in second-tier buckets depending on what location was matched: city, then region, then province and so on. To complicate things even more, there is also a third-tier bucket where results are placed according to the value of a ranking field: all documents with the value 1 in the ranking field go in bucket 1, and so on. And finally, results should be randomized within the third-tier bucket... On top of this they obviously want support for facets and paging. My apologies for the long mail but I would greatly appreciate feedback and/or suggestions. I'm aware that this is a very particular problem, but anything that points me in the right direction is helpful. Cheers, Tom

    Read the article

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while I'm connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250ms, then it becomes almost impossible to scroll, click or type with accuracy. Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings that range from 9ms all the way up to 1300ms. The problem seems to be at its worst during the hours of 1PM to 4:30PM. Outside of those hours and the variance in pings settles down, mostly between 9ms and 50ms. The signal to noise ratio and upstream power are both fine on my modem--the values are here: http://pastebin.com/D4hWGPXf I ran a trace route from my computer to google.com (the results of which are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1)--the variance in ping times existed in exactly the same manner as if I were pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see it can get pretty bad. Three Comcast techs have been out (two of them were here when the problem wasn't happening) and they as well as the regional tier 2 Comcast support were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them. Does anyone know what could cause these sorts of problems or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal--are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated! Edit: This is Comcast cable internet at a small start-up, we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable). Update: Tier 3 Comcast support advised swapping out the modem, a tech came here today and did that--same problem persists.

    Read the article

  • Is System.Windows.Media.RenderCapability the wrong source to detect the current render-mode

    - by happyclicker
    I use System.Windows.Media.RenderCapability.Tier to show the current render mode within a diagnostics panel of my app. If I force the app to change the render mode through the following code HwndSource hwndSource = PresentationSource.FromVisual(visual) as System.Windows.Interop.HwndSource; HwndTarget hwndTarget = hwndSource.CompositionTarget; hwndTarget.RenderMode = renderMode; neither does System.Windows.Media.RenderCapability.TierChanged fire, nor does the System.Windows.Media.RenderCapability.Tier property change. However, the changes are applied to the app: if I look with Perforator, the render mode has been changed to the desired mode. Although I’ve found in many places that System.Windows.Media.RenderCapability.Tier can be used to detect the current render state (also MSDN, see this), it seems System.Windows.Media.RenderCapability only gives information about the capabilities and not about the current mode. That also makes sense if I look at the name of the class. Is there another source for knowing how WPF content is actually rendered, or am I doing something wrong?
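    A sketch of the distinction (my own illustration, not taken from the question or an official answer): RenderCapability.Tier reports what the graphics hardware is capable of, while the mode actually in effect for a given window can be read back from the same HwndTarget whose RenderMode is being set above:
        using System.Windows;
        using System.Windows.Interop;
        using System.Windows.Media;

        public static class RenderDiagnostics
        {
            // Hardware capability tier (0, 1 or 2); does not change when RenderMode is forced.
            public static int GetHardwareTier()
            {
                return RenderCapability.Tier >> 16;
            }

            // The render mode actually in effect for the window hosting this visual
            // (Default = hardware where possible, SoftwareOnly = forced software rendering).
            public static RenderMode? GetCurrentRenderMode(Visual visual)
            {
                var hwndSource = PresentationSource.FromVisual(visual) as HwndSource;
                var hwndTarget = hwndSource != null ? hwndSource.CompositionTarget : null;
                return hwndTarget != null ? hwndTarget.RenderMode : (RenderMode?)null;
            }
        }
    So a diagnostics panel that should reflect a forced mode would re-read HwndTarget.RenderMode per window, rather than relying on RenderCapability.Tier or its TierChanged event, which only track hardware capability.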

    Read the article

  • Zend Framework 2 without MVC

    - by Luke
    I'm currently using Zend Framework 2 for all my web tier development, using the MVC module that's shipped with the framework. However, I want to implement my business logic in a separate layer - call it the business tier - which is a non-HTTP layer exposed through AMQP, and I'd like to reuse my knowledge of PHP for implementing this. Since there is a lot of "stuff" that I need in this business layer, such as configuration, a service manager, database access, etc., I'd like to use all the goodies shipped with Zend Framework 2 for this. Are there any examples or tutorials out there on how to build a Zend Framework 2 application that is not built for the web tier and doesn't require the MVC module?

    Read the article

  • links for 2011-03-15

    - by Bob Rhubart
    Dr. Frank Munz: Resize AWS EC2 Cloud Instances Dr Munz says: "You cannot dynamically resize a running cloud instance. E.g. there is no API call to ask for 2.2 GHz CPU speed instead of 1.8 GHz or to dynamically add another 3.5 GB of RAM." (tags: oracle cloud amazon ec2) Roddy Rodstein: Oracle VM Manager Architecture and Scalability Rodstein says: "Oracle VM Manager can be installed in an all-in-one configuration using the default Oracle 10g Express Database or in a more traditional two tier architecture with an OC4J web tier and a 10 or 11g database tier." (tags: oracle otn virtualization oraclevm) Mark Nelson: Getting started with Continuous Integration for SOA projects Nelson says: "I am exploring how to use Maven and Hudson to create a continuous integration capability for SOA and BPM projects. This will be the first post of several on this topic, and today we will look at setting up some simple continuous integration for a single SOA project." (tags: oracle maven hudson soa bpm) 5 New Java Champions (The Java Source) Tori Wieldt shares the big news. Congratulations to new Java Champs Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko. (tags: oracle java) Alert for Forms customers running Oracle Forms 10g (Grant Ronald's Blog) Ronald says: "While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g." (tags: oracle oracleforms) Brenda Michelson: Enterprise Architecture Rant #4,892 "I’m increasingly concerned about the macro-direction of our field, as we continue to suffer ivory tower enterprise architecture punditry, rigid frameworks and endless philosophical waxing." - Brenda Michelson (tags: entarch enterprisearchitecture ivorytower) Amitabh Apte: Enterprise Architecture - Different Perspectives "Business does not need Enterprise Architecture," says Apte, "it needs value and outcomes from the EA function." (tags: entarch enterprisearchitecture) First Ever MySQL on Windows Online Forum - March 16, 2011 (Oracle's MySQL Blog) Monica Kumar shares the details. (tags: oracle mysql mswindows) Jeff Davies: Running Multiple WebLogic and OSB Domains "There is a small 'gotcha' if you want to create multiple domains on a devevelopment machine," says Jeff Davies. But don't worry - there's a solution. (tags: oracle soa osb weblogic servicebus) The Arup Nanda Blog: Good Engineering "Engineering is not about being superficially creative," Nanda says, "it's about reliability and trustworthiness." (tags: oracle engineering software technology) Welcome to the SOA & E2.0 Partner Community Forum (SOA Partner Community Blog) (tags: ping.fm)

    Read the article

  • Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    - by BillSawyer
    I am pleased to release the replay and presentation for the ATG Live Webcast: Technical Preview of EBS 12.2 Online Patching (Presentation) Kevin Hudson, Senior Director and one of the Online Patching architects, discussed one of the cornerstone new features in our upcoming Oracle E-Business Suite 12.2 release. This ground-breaking feature is based upon Edition-Based Redefinition, a new 11gR2 Database feature that was built to Oracle Applications division specifications to allow the E-Business Suite's database tier to be patched while the environment is running.  Online Patching combines the use of Edition-Based Redefinition and new E-Business Suite technologies to allow patching of the E-Business Suite's database and application tier servers while the environment is being actively used by its end-users. (June 2012) Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Oracle Database Appliance Now Certified by SAP

    - by Bandari Huang
    All SAP products based on SAP NetWeaver 7.x that are also certified for Oracle Database 11g Release 2 (single node or RAC) can now be used with the Oracle Database Appliance. RAC One Node is NOT supported. Only three-tier SAP installations are supported: only the Oracle database can run on the Oracle Database Appliance; no SAP instance can be deployed on the Oracle Database Appliance. SAP instances have to run on separate middle-tier machines of any hardware architecture and operating system. Central Services (ASCS and/or SCS) can be configured to run on the Oracle Database Appliance for Unicode installations of SAP. SAP BR*Tools support is now available for the Oracle Database Appliance. For more information about SAP on ODA, please refer to: Using SAP NetWeaver with the Oracle Database Appliance New Nov 2012 Note 1760737 - SAP Software and Oracle Database Appliance (ODA) Note 1785353 - ODA 11.2.0: Patches for 11.2.0.3

    Read the article

  • Advantages of Client/Server Architecture over Mainframe Architecture

    Originally, mainframe architectures relied on a centralized host server that processed data and returned it to be displayed on a dumb terminal. These dumb terminals did not have any processing power and could only display data that was sent from the mainframe. Application architecture completely changed with the advent of N-Tier architecture. The N-Tier architecture replaced the dumb terminals with standard PCs that could think and/or process for themselves. This allowed applications to be decentralized. Further, this type of architecture also breaks up the roles found within a mainframe by extracting web interfaces, application logic and data access into 3 separate parts so that each can be extended and distributed as the demands on an application increase.

    Read the article
