Search Results

Search found 16764 results on 671 pages for 'provider model'.


  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is very common for a database product to have a sample database. Companies keep improving their products and keep coming up with innovations, and to demonstrate the capabilities of their new enhancements they need sample databases. Microsoft has various sample databases available for free download for its SQL Server product. I have collected them here in a single blog post.

    Download an AdventureWorks Database: The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources.

    Coconut Dal: Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer.

    DataBooster – Extension to ADO.NET Data Provider: The dbParallel DataBooster library is a high-performance extension to an ADO.NET Data Provider and includes two aspects: 1) a slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help applications keep a clean DAL and avoid over-packing and redundant copying of transferred data; 2) a booster for writing mass data to a database, based on rational use of database concurrency and effective use of network bandwidth.

    Tabular AMO 2012: The sample is made of two parts. The first part is a library of functions to manage tabular models (AMO2Tabular V2). The second part is a sample that builds a tabular model (AdventureWorks Tabular AMO 2012) using the AMO2Tabular library; the created model is similar to the AdventureWorks Tabular Model 2012.

    SQL Server Analysis Services Product Samples: SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining.

    Analysis Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Reporting Services Product Samples: This project contains Reporting Services samples released with the Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum.

    Reporting Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Integration Services Product Samples: This project contains Integration Services samples released with the Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum.
    Integration Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    Windows Azure SQL Reporting Admin Sample: The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of the SQL Reporting APIs and manages (adds/updates/deletes) permissions of SQL Reporting users.

    Windows Azure SQL Reporting ReportViewer-SOAP API usage sample: These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure web application.

    Enterprise Library 5.0 – Integration Pack for Windows Azure: This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
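
    The Coconut Dal blurb above argues that anyone handwriting ADO.NET should use a library instead. For context, here is a minimal sketch of the raw ADO.NET pattern (DbConnection -> DbCommand -> DbParameter -> DbDataReader) that such wrappers hide; the connection string and the AdventureWorks query are illustrative assumptions, not taken from either library.

    // A minimal sketch of the raw ADO.NET pattern that wrappers like Coconut Dal
    // or DataBooster's DbAccess class are designed to hide. The connection string
    // and query are made up for illustration.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class RawAdoNetExample
    {
        static void Main()
        {
            const string connectionString =
                @"Data Source=.\SQLEXPRESS;Initial Catalog=AdventureWorks;Integrated Security=True";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name, ProductNumber FROM Production.Product WHERE ListPrice > @minPrice",
                connection))
            {
                // Each parameter is declared and typed explicitly.
                command.Parameters.Add("@minPrice", SqlDbType.Money).Value = 100m;
                connection.Open();

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} ({1})", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }
    }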

    Read the article

  • Knockout.js - Filtering, Sorting, and Paging

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/knockout.js---filtering-sorting-and-paging.aspx Knockout.js is fantastic! Maybe I missed it, but it appears to be missing flexible filtering, sorting, and pagination for its grids. This is a summary of my attempt at creating this functionality, which has been working out amazingly well for my purposes. Before you continue, note that this post is not intended to teach you the basics of Knockout; they have already created a fantastic tutorial for that purpose, and you'd be wise to review it before you continue. http://learn.knockoutjs.com/ Please view the full source code and functional example on jsFiddle. Below you will find a brief explanation of some of the components. http://jsfiddle.net/JTimperley/pyCTN/13/

    First we need to create a model to represent our records. This model is a simple container with defined and guaranteed members.

    function CustomerModel(data) {
        if (!data) {
            data = {};
        }

        var self = this;
        self.id = data.id;
        self.name = data.name;
        self.status = data.status;
    }

    Next we need a model to represent the page as a whole, with an array of the previously defined records. I have intentionally overlooked the filtering and sorting options for now. Note how the filtering, sorting, and pagination are chained together to accomplish all three goals. This strategy allows each of these pieces to be used selectively based on the page's needs. If you only need sorting, just sort, etc.

    function CustomerPageModel(data) {
        if (!data) {
            data = {};
        }

        var self = this;
        self.customers = ExtractModels(self, data.customers, CustomerModel);

        var filters = […];
        var sortOptions = […];

        self.filter = new FilterModel(filters, self.customers);
        self.sorter = new SorterModel(sortOptions, self.filter.filteredRecords);
        self.pager = new PagerModel(self.sorter.orderedRecords);
    }

    The code currently supports text box and drop-down filters. Text box filters require defining the current 'Value' and a 'RecordValue' function to retrieve the filterable value from the provided record. Drop-downs allow defining all possible values, the current option, and the 'RecordValue' as before. Once these filters are defined, they are automatically added to the screen, and any changes to their values will automatically update the results, causing the sort and pagination to be re-evaluated.

    var filters = [
        {
            Type: "text",
            Name: "Name",
            Value: ko.observable(""),
            RecordValue: function(record) { return record.name; }
        },
        {
            Type: "select",
            Name: "Status",
            Options: [
                GetOption("All", "All", null),
                GetOption("New", "New", true),
                GetOption("Recently Modified", "Recently Modified", false)
            ],
            CurrentOption: ko.observable(),
            RecordValue: function(record) { return record.status; }
        }
    ];

    Sort options are simpler and are also automatically added to the screen. Simply provide each option's name and value for the sort drop-down, as well as a function defining how the records are compared. This mechanism could easily be adapted to use table headers as the sort triggers; that strategy hasn't crossed my functionality needs at this point.

    var sortOptions = [
        {
            Name: "Name",
            Value: "Name",
            Sort: function(left, right) { return CompareCaseInsensitive(left.name, right.name); }
        }
    ];

    Paging options are completely contained by the pager model. Because we will be chaining arrays between our filtering, sorting, and pagination models, the following utility method is used to prevent errors when handing an observable array to another observable array.
    function GetObservableArray(array) {
        if (typeof(array) == 'function') {
            return array;
        }

        return ko.observableArray(array);
    }

    Read the article

  • Oracle Fusion CRM Implementation Bootcamp for EMEA Systems Integrators - Paris July 24-26th

    - by Richard Lefebvre
    To support partner success and increase win potential with Fusion CRM, we are organizing a unique bootcamp on Fusion CRM, intended for Oracle EMEA partners, on July 24th to 26th. Join us for this outstanding Bootcamp and learn in-depth Fusion CRM know-how from Oracle Corporation. The official announcement is forthcoming, but we wanted you to be able to determine the appropriate candidate to attend this workshop; we will then send the actual invitation to the selected candidate. Due to the limited number of seats, we will be limiting the number of registrations per SI company and will be selecting the participants. If you are interested in having one or more representatives of your company attend this bootcamp, please send an email to [email protected] by June 18th indicating the name and email address of the participants you would like to nominate, ranked by priority.

    What we will cover: This Bootcamp presents the fundamental concepts of the Oracle Fusion CRM applications. It introduces you to each functional area of the product, how it is used, and what you need to consider when implementing it for an organization. While we do examine implementation considerations, we do not address the detailed steps of implementation; instead, we direct you to the relevant resources to learn more.

    Topics covered:
    - Fusion CRM Introduction
    - Fusion CRM Security Introduction
    - Fusion Functional Setup Manager Introduction
    - Customer Model Introduction
    - Customer Center Introduction
    - Customer Data Management Introduction
    - Marketing & Campaigns Introduction
    - Lead Management Introduction
    - Territory Management Introduction
    - Territory Modeling Introduction with Exercise
    - Opportunity Management Introduction
    - Forecasting Introduction
    - Analytics Introduction
    - CRM For Microsoft Outlook Introduction
    - Customizing with Composers Introduction
    - Roundtable Discussions, and time for hands-on labs (days 2, 3, 4)
    - Next Steps: available resources, ongoing learning path, partner environments, keeping in touch, and feedback

    Bootcamp Goals: Enable a new Fusion CRM implementation team member to:
    - Describe the scope of Oracle Fusion CRM applications
    - Describe the basic security model
    - Describe the customer model
    - Perform common sales and marketing user transactions
    - Access and navigate the Functional Setup Manager
    - Model territories in Fusion CRM using sample business requirements
    - Do the necessary planning before implementing the offerings and options
    - Describe the analytics available with the Fusion CRM product
    - Describe the basic page customizations that can be done to meet business requirements
    - Find documentation and other courses to assist in performing setup tasks

    Expectations: This Bootcamp program should prime the SI organization's implementation consultants to attain the basic skills necessary to support a consulting practice in the delivery, scoping, pricing, and planning of your Fusion CRM implementations. Oracle University will begin to offer additional deep-skill training, starting this summer, designed to follow the Introduction Bootcamp. Participants will be expected to take part in labs, exercises, workshops, and roundtable discussions with the Oracle Product Managers.

    Who should attend: This class is designed for your lead CRM implementation consultants, those who will support your Fusion CRM consulting practice as it grows. These individuals may be members of a centre of excellence or a skills leadership office. Individuals attending the bootcamp must have prior experience implementing a CRM solution.
    Intended Audience: Oracle Diamond, Platinum and Gold Level SIs (Top SIs) with specialization in Oracle Applications CRM implementations and a commitment to achieving Fusion CRM Implementation Specialization, expressed through an investment in a Center of Excellence/Innovation Center for Fusion CRM Applications. Individuals who will support the implementation practice as it is forming and will deliver Fusion CRM On Premise and Cloud Services implementations; functional practice leaders, the future Fusion Application Wizards within the SI's organization.

    This Bootcamp is designed for people who:
    - Will deliver Fusion CRM implementations
    - Have had little or no exposure to Fusion CRM applications
    - Are familiar with at least one other CRM application
    - Have a business-analyst level of technical background

    Prerequisites: Please note that participants will be asked to take self-service trainings (video format) and pass the related assessments prior to joining the Bootcamp.

    Fees: This event is FREE of charge for Oracle partners.

    When: 24 July – 26 July, 2012 (8:30 - 18:00 each day, including the last day; with recommended but optional evening events on all three days from 18:00 - 20:00)

    Where: Paris, France (location to be defined)

    Travel: To make your travel hassle-free, we suggest you plan your arrival in Paris on July 23rd and your departure on the 27th.

    Agenda: The final agenda and registration details will be issued closer to the event date.

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
    Cloud Application Management for Platforms: The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry-standard self-service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter), where it will be finalized in an open development process. CAMP is targeted at application developers and deployers for self-service management of their applications on a Platform-as-a-Service cloud. It is closely aligned with the application development process, where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application's dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and, if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self-service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and its use of the underlying platform services. The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources. The model for CAMP includes resources that represent the Application, its Components, and any Platform Components that they depend on. It's important that PaaS cloud consumers understand that for a PaaS cloud, these are the abstractions the user would prefer to work with, not virtual machines and the various resources such as compute power, storage, and networking. PaaS cloud consumers would also not like to become system administrators for the infrastructure that is hosting their applications and component services. CAMP works at this more abstract level, and yet still accommodates platforms that are built using an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer. One major challenge addressed by the CAMP specification is that of ensuring that application deployment on a new platform is as seamless and error-free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. In CAMP this is accomplished by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) which are provisioned (on the basis of a schema, for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool. The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter to manage the components running in each cloud. The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach.
    For example, the way each of these companies creates its platform is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal, to ease implementation and adoption. Over time there will be many different types of platform components that applications can use and which need management. CAMP at this point includes only one example of this (in an appendix): Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.
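
    Since the post describes CAMP's interface as a RESTful binding over standard HTTP with JSON-encoded resources, here is a minimal sketch of what a client interaction might look like. The endpoint URL, resource path, and JSON shape are hypothetical assumptions; the actual representations are defined by the specification under development at OASIS.

    // Hypothetical sketch: fetching a deployed-application resource over a
    // RESTful JSON binding like the one CAMP describes. The URL and the JSON
    // fields are illustrative assumptions, not taken from the CAMP draft.
    using System;
    using System.IO;
    using System.Net;

    class CampClientSketch
    {
        static void Main()
        {
            // Hypothetical platform endpoint exposing CAMP-style resources.
            var request = (HttpWebRequest)WebRequest.Create(
                "https://paas.example.com/camp/assemblies/42");
            request.Method = "GET";
            request.Accept = "application/json";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                // Expected (hypothetical) shape: { "name": "...", "components": [ ... ] }
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }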

    Read the article

  • Multidimensional Thinking – 24 Hours of PASS: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary, which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do, don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of PASS page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12 hours on each of two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!) Multidimensional Thinking: The Presentation. My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did! Kimball All the Way. There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources, which gets into much more detail about all kinds of scenarios (well beyond the basics!), is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going in the right direction, because it just didn’t click with me initially. I’ve had years of practice since then, and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to the users. They will definitely make sure you get it right! Schema Generation. One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this.
    Yes, the subject was Integration Services, but as part of that presentation I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data mart – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • [EF + Oracle] Entities

    - by JTorrecilla
    Prologue: Following on from the series I started yesterday about Entity Framework with Oracle, today I am going to start talking about Entities.

    What is an Entity? An Entity is an object of the EF model corresponding to a record in a DB table. For example, in Image 1 we can see one Entity from our model, and in Image 2 we can see the mapping done with the DB. (Image 1) (Image 2) More precisely, an Entity is a class that inherits from the abstract class “EntityObject”, contained in the “System.Data.Objects.DataClasses” namespace. At the same time, this class inherits from the following class and interfaces:
    - StructuralObject: an abstract class that inherits from the INotifyPropertyChanging and INotifyPropertyChanged interfaces; it exposes the events that manage changes to the class and the functions related to checking the data types of the properties of our Entity.
    - IEntityWithKey: interface which exposes the Key of the entity.
    - IEntityWithChangeTracker: interface which lets you indicate the state of the entity (Detached, Modified, Added…).
    - IEntityWithRelationships: interface which indicates the relationships of the entity.

    What is the content of an Entity? An Entity is composed of properties, navigation properties, and methods.

    What is a Property? An Entity property is an object that represents a column of the mapped table in the DB. It has a data type equivalent in the .NET Framework to the DB type. When we create the EF model, VS internally creates the code for each Entity selected in the Tables step, as well as all the methods that we will see in later steps. For each property, VS creates a structure similar to:
    - A private variable with the mapped data type.
    - A function named On{Property_Name}Changing({dataType} value): it manages the event raised when we try to change the value.
    - A function named On{Property_Name}Changed: it manages the event raised when the property has changed successfully.
    - A property with Get and Set methods. The Set method manages the private variable and performs the following steps: raise the Changing event; report that the Entity is changing; set the private variable (for this it uses the SetValidValue function of StructuralObject; there is a function for each data type, and each function takes two parameters: the value, and whether the property allows nulls); report that the entity has changed successfully; raise the Changed event of the property.

    The ReportPropertyChanging and ReportPropertyChanged events indicate, respectively, that there are pending changes in the Entity and that the changes completed successfully. When ReportPropertyChanged is raised, the tracking state of the Entity is changed.

    What is a Navigation Property? Navigation properties are properties of the type EntityCollection<TEntity>, where TEntity is an Entity type from the model related to the current one; that is, a set of records from a related table in the DB. The EntityCollection class inherits from:
    - RelatedEnd: an abstract class that provides the functions needed to obtain the related objects.
    - ICollection<TEntity>
    - IEnumerable<TEntity>
    - IEnumerable
    - IListSource
    For the previous interfaces, I recommend the following post from Jose Miguel Torres. Navigation properties allow us to easily get and query objects related to the Entity.

    Methods? There is only one method in the Entity object: “Create{Entity}”, which allows us to create an object of the Entity by passing the parameters needed to create it.
    Finally: After this chapter, we know what an Entity is, how it is related to the DB, and how it relates to other Entities. In the following chapters, we will see CRUD operations (Create, Read, Update, Delete).
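
    To make the generated property structure concrete, here is a sketch of what such a property looks like in EF 4-style generated code. The entity and property names ("Customer", "Name") are hypothetical, and the EDM attributes and documentation comments that the designer also emits are omitted for brevity.

    // Sketch of an EF 4-style generated property, following the pattern described
    // above. "Customer"/"Name" are hypothetical; designer-emitted attributes are
    // omitted.
    public partial class Customer : EntityObject
    {
        private string _name;

        public string Name
        {
            get { return _name; }
            set
            {
                OnNameChanging(value);                      // raise the Changing event
                ReportPropertyChanging("Name");             // tell the tracker a change is pending
                _name = StructuralObject.SetValidValue(value, false); // false = nulls not allowed
                ReportPropertyChanged("Name");              // tracking state becomes Modified
                OnNameChanged();                            // raise the Changed event
            }
        }

        partial void OnNameChanging(string value);
        partial void OnNameChanged();
    }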

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop:

    pMatrix = new Matrix();
    pMatrix = makePerspective(...)

    mvMatrix = new Matrix()
    camera.apply(mvMatrix); // Calls gluLookAt

    // Move the object into position.
    mvMatrix.translatev(position);
    mvMatrix.rotatef(rotation.x, 1, 0, 0);
    mvMatrix.rotatef(rotation.y, 0, 1, 0);
    mvMatrix.rotatef(rotation.z, 0, 0, 1);

    var nMatrix = new Matrix();
    nMatrix.set(mvMatrix.get().getInverse().getTranspose());

    // Set vertex shader uniforms.
    gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

    // ...
    gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

    // Attributes
    attribute vec3 aVertexPosition;
    attribute vec4 aVertexColor;
    attribute vec3 aVertexNormal;

    // Uniforms
    uniform mat4 uMVMatrix;
    uniform mat4 uNMatrix;
    uniform mat4 uPMatrix;

    // Varyings
    varying vec4 vColor;

    // Constants
    const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
    const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

    float ComputeLighting() {
        vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
        transformedNormal = uNMatrix * transformedNormal;
        float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
        return max(base, 0.0);
    }

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        float lightWeight = ComputeLighting();
        vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
    }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL, if it's missing, would be appreciated.

    Read the article

  • Abstraction, Politics, and Software Architecture

    Abstraction can be defined as a general concept and/or idea that lacks any concrete details. Throughout history this type of thinking has led to an array of new ideas and innovations, as well as increased confusion and conspiracy. If one looks back at our history, one will see that abstraction has been used in various forms throughout our past. When I was growing up, I do not know how many times I heard politicians say “Leave no child left behind” or “No child left behind” as a major part of their campaign rhetoric regarding a stance on education. Their slogan is a perfect example of abstraction because it only offers a very general concept about improving our education system; they do not mention how they would like to do it. If they did, then they would be adding concrete details to their abstraction, thus turning it into an actual working plan for how we as a society can help children succeed in school and in life, but then they would not be using abstraction. By now I am sure you are wondering what abstraction has to do with software architecture. You are valid in thinking this way, but abstraction is a wonderful tool used in information technology, especially in the world of software architecture. Abstraction is one method of extracting the concepts of an idea so that it can be understood and discussed by others of varying technical abilities and backgrounds. One way in which I tend to express my architectural design thoughts is through the use of basic diagrams to convey an idea for a system or a new feature for an existing application. This allows me to generically model an architectural design through the use of views and the Unified Modeling Language (UML). UML is a standard method for creating 4+1 Architectural View Models. The 4+1 Architectural View Model consists of four views, typically created with UML, as well as a general description of the concept being expressed by the model.

    The 4+1 Architectural View Model:
    - Logical View: models a system's end-user functionality.
    - Development View: models a system as a collection of components and connectors to illustrate how it is intended to be developed.
    - Process View: models the interaction between system components and connectors to indicate the activities of the system.
    - Physical View: models the placement of the system's collection of components and connectors within a physical environment.

    Recently I had to use the concept of abstraction to express an idea for implementing a new security framework on an existing website. My concept would add session-based management in order to properly secure the site and allow page access based on valid user credentials and last user activity. I created a basic Process View by using UML diagrams to communicate the basic process flow of my changes in the application so that all of the project's stakeholders would be able to understand my idea. Additionally, I created a Logical View on a whiteboard while walking through the process workflow with a few stakeholders, to show how end-users would be affected by the new framework and to gain additional input about the design. After my Logical and Process Views were accepted, I then started creating a more detailed Development View in order to map how the system would be built, based on the concept of components and connections and the previously defined interactions. I really did not need to create a Physical View for this idea because we were updating an existing system that was already deployed based on an existing Physical View.
What do you think about the use of abstraction in the development of software architecture? Please let me know.

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer, but I'm new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals, and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out, by suggesting an alternative way to get the required data into the pixel shader or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding;
    };

    struct HullInputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
    };

    struct ConstantOutputType
    {
        float edges[3] : SV_TessFactor;
        float inside : SV_InsideTessFactor;
    };

    struct HullOutputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
        float4 depthPosition : TEXCOORD2;
    };

    ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
    {
        ConstantOutputType output;
        output.edges[0] = tessellationAmount;
        output.edges[1] = tessellationAmount;
        output.edges[2] = tessellationAmount;
        output.inside = tessellationAmount;
        return output;
    }

    [domain("tri")]
    [partitioning("integer")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunc("ColorPatchConstantFunction")]
    HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
    {
        HullOutputType output;
        output.position = patch[pointId].position;
        output.tex = patch[pointId].tex;
        output.tex2 = patch[pointId].tex2;
        output.normal = patch[pointId].normal;
        output.tangent = patch[pointId].tangent;
        output.binormal = patch[pointId].binormal;
        return output;
    }

    Edited to include the domain shader:

    [domain("tri")]
    PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
    {
        float3 vertexPosition;
        PixelInputType output;

        // Determine the position of the new vertex.
        vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

        output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
        output.depthPosition = output.position;

        output.tex = patch[0].tex;
        output.tex2 = patch[0].tex2;
        output.normal = patch[0].normal;
        output.tangent = patch[0].tangent;
        output.binormal = patch[0].binormal;

        return output;
    }

    Read the article

  • How to resize / enlarge / grow a non-LVM ext4 partition

    - by Mischa
    I have already searched the forums but couldn't find a good suitable answer: I have an Ubuntu Server 10.04 as KVM host and a guest system that also runs 10.04. The host system uses LVM, and there are three logical volumes which are provided to the guest as virtual block devices: one for /, one for /home, and one for swap. The guest had been partitioned without LVM. I have already enlarged the logical volume in the host system, and the guest successfully sees the bigger virtual disk. However, this virtual disk contains one "good old" partition, which still has the old small size. The output of fdisk -l is:

    me@produktion:/$ LC_ALL=en_US sudo fdisk -l

    Disk /dev/vda: 32.2 GB, 32212254720 bytes
    255 heads, 63 sectors/track, 3916 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000c8ce7

       Device Boot      Start         End      Blocks   Id  System
    /dev/vda1   *           1        3917    31455232   83  Linux

    Disk /dev/vdb: 2147 MB, 2147483648 bytes
    244 heads, 47 sectors/track, 365 cylinders
    Units = cylinders of 11468 * 512 = 5871616 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000f2bf7

       Device Boot      Start         End      Blocks   Id  System
    /dev/vdb1               1         366     2095104   82  Linux swap / Solaris
    Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(0, 43, 28)
    Partition 1 has different physical/logical endings: phys=(260, 243, 47) logical=(365, 136, 44)

    Disk /dev/vdc: 225.5 GB, 225485783040 bytes
    255 heads, 63 sectors/track, 27413 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00027f25

       Device Boot      Start         End      Blocks   Id  System
    /dev/vdc1               1        9138    73398272   83  Linux

    The output of parted print all is:

    Model: Virtio Block Device (virtblk)
    Disk /dev/vda: 32.2GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  32.2GB  32.2GB  primary  ext4         boot

    Model: Virtio Block Device (virtblk)
    Disk /dev/vdb: 2147MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system     Flags
     1      1049kB  2146MB  2145MB  primary  linux-swap(v1)

    Model: Virtio Block Device (virtblk)
    Disk /dev/vdc: 225GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  75.2GB  75.2GB  primary  ext4

    What I want to achieve is to simply grow or resize the partition /dev/vdc1 so that it uses the whole space provided by the virtual block device /dev/vdc. The problem is that when I try to do that with parted, it complains:

    (parted) select /dev/vdc
    Using /dev/vdc
    (parted) print
    Model: Virtio Block Device (virtblk)
    Disk /dev/vdc: 225GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  75.2GB  75.2GB  primary  ext4

    (parted) resize 1
    WARNING: you are attempting to use parted to operate on (resize) a file system.
    parted's file system manipulation code is not as robust as what you'll find in
    dedicated, file-system-specific packages like e2fsprogs. We recommend you use
    parted only to manipulate partition tables, whenever possible. Support for
    performing most operations on most types of file systems will be removed in an
    upcoming release.
    Start? [1049kB]?
    End? [75.2GB]? 224GB
    Error: File system has an incompatible feature enabled. Compatible features are has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or debugfs to remove features.

    So what can I do? This is a headless production system. What is a safe way to grow this partition? I CAN unmount it, though, so that is not the problem.

    Read the article

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard FedUtil stuff. My good friend Brock and I put together a two-day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take a standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT, and OAuth. The lab also includes the latest version of the thinktecture identityserver, and you will learn how to use and customize it. If you are looking for an open-enrollment style of training, have a look here. Or contact me directly! The course outline looks as follows:

    Day 1

    Intro to Claims-based Identity & the Windows Identity Foundation: WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation, and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code.

    Externalizing Authentication for Web Applications: WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called an identity provider or, in more general terms, a security token service. This module looks at this scenario both from an application and an identity provider point of view and walks you through the concepts necessary to centralize application login logic, both using a standard product like Active Directory Federation Services and using a custom token service built with WIF's API support.

    Externalizing Authentication for SOAP Services: One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a separate service using the WS-Trust protocol and how WIF can simplify the WCF security model and extensibility API.

    Day 2

    Advanced Topics: Security Token Service Architecture, Delegation and Federation: The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition, you will learn about the principal architecture of an STS, how to customize the one that comes with this training course, and how to build your own.

    Outsourcing Authentication: Windows Azure & the Azure AppFabric Access Control Service: Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows you to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability.
    Another advantage of the Access Control Service is that it allows easy integration of both the “enterprise” protocols like WS-* and “web identities” like LiveID, Google, or Facebook into your applications. ACS acts as a protocol bridge in this case, where the application developer doesn't need to implement all these protocols but simply uses a service to make it happen.

    Claims & Federation for the Web and Mobile World: The web & mobile world is also moving to a token- and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small-footprint devices. Patterns for allowing third-party applications to work with your data without having to disclose your credentials are also important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you focus on implementing the actual application. HTH
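
    As a small taste of the claims-based programming model the course starts from, here is a minimal sketch using the .NET 4.5 claims types (System.Security.Claims); the earlier Microsoft.IdentityModel namespaces from WIF 1.0 are analogous. The claim values and the custom claim type URI are invented for illustration.

    // A minimal sketch of claims-based identity and authorization. The claim
    // values and the custom claim type URI below are made up for illustration.
    using System;
    using System.Security.Claims;

    class ClaimsSketch
    {
        static void Main()
        {
            var identity = new ClaimsIdentity(new[]
            {
                new Claim(ClaimTypes.Name, "alice"),
                new Claim(ClaimTypes.Role, "Sales"),
                // Hypothetical application-defined claim issued by a token service.
                new Claim("http://example.org/claims/purchaselimit", "10000")
            }, "Federation"); // authentication type; without it the identity is anonymous

            var principal = new ClaimsPrincipal(identity);

            // Claims-based authorization: ask what the caller has, not who they are.
            if (principal.HasClaim(ClaimTypes.Role, "Sales"))
            {
                Console.WriteLine("Sales features enabled for {0}", identity.Name);
            }
        }
    }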

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there’s a better way, according to a recent Wall Street Journal article: “Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility.” Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you’ve been trapped in some sort of covert psychological-stress test? Another WSJ article, called “The Self-Service Airport,” says there’s reason for hope there as well: “Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, ‘your first interaction could be with a flight attendant,’ said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc.” And in the topsy-turvy world of air travel, it’s not just the passengers who’ve been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges, some beyond their control, some not, that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year. In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: “Airlines say the advanced technology will quicken the airport experience for seasoned travelers, shaving a minute or two from the checked-baggage process alone, while freeing airline employees to focus on fliers with questions. ‘It's more about throughput with the resources you have than getting rid of humans,’ said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA.” Oracle is attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:

    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle’s Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle’s Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time. I’ll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline, so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it’s back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: “In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers.” (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations; for instance, when I pitch forward 45 degrees and then start to turn, there should be rotation applied to all three directions to get the "diagonal yaw", right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn.... :( So my question is: how do you calculate the X, Y, and Z rotations for an object in terms of pitch and yaw? Here are the current (almost working) calculations for the rotation acceleration:

    float accel = .75f;

    // Thrust +Y / Forward
    if (currentKeyboardState.IsKeyDown(Keys.I))
    {
        this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
        this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
        this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
    }

    // Rotation +Z / Yaw
    if (currentKeyboardState.IsKeyDown(Keys.J))
    {
        this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
        this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
        this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
    }

    // Rotation -Z / Yaw
    if (currentKeyboardState.IsKeyDown(Keys.K))
    {
        this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
        this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
        this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
    }

    // Rotation +X / Pitch
    if (currentKeyboardState.IsKeyDown(Keys.F))
    {
        this.ship.RotationAccelerationX += accel;
    }

    // Rotation -X / Pitch
    if (currentKeyboardState.IsKeyDown(Keys.D))
    {
        this.ship.RotationAccelerationX -= accel;
    }

    I'm combining that with drawing code that applies a rotation to the model:

    public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elapsedTime)
    {
        float seconds = (float)elapsedTime.TotalSeconds;

        // update velocity based on acceleration
        this.VelocityX += this.AccelerationX * seconds;
        this.VelocityY += this.AccelerationY * seconds;
        this.VelocityZ += this.AccelerationZ * seconds;

        // update position based on velocity
        this.PositionX += this.VelocityX * seconds;
        this.PositionY += this.VelocityY * seconds;
        this.PositionZ += this.VelocityZ * seconds;

        // update rotational velocity based on rotational acceleration
        this.RotationVelocityX += this.RotationAccelerationX * seconds;
        this.RotationVelocityY += this.RotationAccelerationY * seconds;
        this.RotationVelocityZ += this.RotationAccelerationZ * seconds;

        // update rotation based on rotational velocity
        this.RotationX += this.RotationVelocityX * seconds;
        this.RotationY += this.RotationVelocityY * seconds;
        this.RotationZ += this.RotationVelocityZ * seconds;

        Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
        Matrix rotation = Matrix.CreateRotationX(RotationX) * Matrix.CreateRotationY(RotationY) * Matrix.CreateRotationZ(RotationZ);

        model.Root.Transform = rotation * translation * world;
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);

        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = boneTransforms[mesh.ParentBone.Index];
                effect.View = view;
                effect.Projection = projection;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }
    }

    Read the article

  • Invalid keystore format with SSL in Tomcat 6

    - by strauberry
    I'm trying to set up SSL in my local Tomcat 6 installation. For this, I followed the official How-To, doing the following:

    $JAVA_HOME/bin/keytool -genkey -v -keyalg RSA -alias tomcat -keypass changeit -storepass changeit
    $JAVA_HOME/bin/keytool -export -alias tomcat -storepass changeit -file /root/server.crt

    Then I changed $CATALINA_BASE/conf/server.xml, uncommenting this:

    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS"
               keystoreFile="/root/.keystore" keystorePass="changeit" />

    After starting Tomcat, I get this exception:

    INFO: Initializing Coyote HTTP/1.1 on http-8080
    30.06.2011 10:15:24 org.apache.tomcat.util.net.jsse.JSSESocketFactory getStore
    SCHWERWIEGEND: Failed to load keystore type JKS with path /root/.keystore due to Invalid keystore format
    java.io.IOException: Invalid keystore format
        at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:633)
        at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:38)
        at java.security.KeyStore.load(KeyStore.java:1185)

    When I look into the keystore with keytool -list I get:

    root@host:~# $JAVA_HOME/bin/keytool -list
    Enter key store password: changeit
    Key store type: gkr
    Key store provider: GNU-CRYPTO
    Key store contains 1 entry(ies)

    Alias name: tomcat
    Creation timestamp: Donnerstag, 30. Juni 2011 - 10:13:40 MESZ
    Entry type: key-entry
    Certificate fingerprint (MD5): 6A:B9:...C:89:1C

    Obviously, the keystore types are different. How can I change the type, and will this fix my problem? Thank you!

    Read the article

  • Get-ChildItem fails to connect in SQLSERVER drive

    - by Norman Kelm
    I'm having some trouble with the SQLSERVER PSDrive; see the error below. I only have named instances on my PC, both 2005 and 2008, and I have added the SQL snap-ins. The PC is named YODA and the SQL instance is SQL2008. I navigate to the Databases folder for YODA\SQL2008 (you can see the path below). dir -name spits out a connection error trying to connect to YODASQL2008\DEFAULT, when it should be trying to connect to YODA\SQL2008, and then it outputs the db name, which is Twitter in this case. Is there something missing from my config? Output:

    PS SQLSERVER:\SQL\YODA\SQL2008\Databases> dir -name
    Get-ChildItem : SQL Server PowerShell provider error: Could not connect to 'YODASQL2008\DEFAULT'.
    [Failed to connect to server YODASQL2008. -- A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
    At line:1 char:4
    + dir <<<< -name
        + CategoryInfo          : OpenError: (SQLSERVER:\SQL\...tabases\Twitter:SqlPath) [Get-ChildItem], GenericProviderException
        + FullyQualifiedErrorId : ConnectFailed,Microsoft.PowerShell.Commands.GetChildItemCommand
    Twitter

    This repeats, with the error, for every database. Thanks, Norman
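
    For comparison, here is a minimal sketch of connecting to the same named instance through SMO, the object model underneath the SQLSERVER: provider, to verify that YODA\SQL2008 resolves on its own. This is an illustrative check under the question's server and instance names, not a fix for the provider path issue.

    // A minimal SMO sketch: connect explicitly to the named instance from the
    // question and list its databases. Requires the SMO assemblies that ship
    // with the SQL Server management tools.
    using System;
    using Microsoft.SqlServer.Management.Smo;

    class SmoSketch
    {
        static void Main()
        {
            // Named instance: machine name, backslash, instance name.
            var server = new Server(@"YODA\SQL2008");

            foreach (Database db in server.Databases)
            {
                Console.WriteLine(db.Name);
            }
        }
    }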

    Read the article

  • linked-server sql - access

    - by user22121
    Hi, I have a SQL Server 2000 instance and an Access .mdb database connected by a linked server. On the other hand, I have a program in C# that updates data in a SQL table (Users) based on the Access database. When running, my program returns the following error message:

        OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. Authentication failed.
        [OLE/DB provider returned message: Cannot start the application. The workgroup
        information file is missing or is opened exclusively by another user.]
        OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0'
        IDBInitialize::Initialize returned 0x80040E4D: Authentication failed.].

    The program, the SQL Server, and the Access database are all on a remote server. On the local server, the problem was solved by running the following:

        sp_addlinkedsrvlogin 'ActSC', 'false', NULL, 'admin', NULL

    On the remote server I tried the following, without result:

        sp_addlinkedsrvlogin 'ActSC', true, null, 'user', 'pass'

    On the remote server, and from Query Analyzer, the SQL update statements work correctly. Can you think of what the problem may be? Thanks!
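
    That Jet error usually means the login mapped on the linked server isn't the Access workgroup 'admin' account, or that the workgroup (.mdw) file can't be reached. A hedged sketch that mirrors the mapping that worked locally, applied to all local logins ('ActSC' is the asker's linked server name):

        -- drop the current default mapping, then map every local login to the
        -- Jet 'admin' account with no password, as on the local server
        EXEC sp_droplinkedsrvlogin 'ActSC', NULL;
        EXEC sp_addlinkedsrvlogin 'ActSC', 'false', NULL, 'admin', NULL;

    It may also be worth checking that the account running the remote SQL Server service has read/write permission on the folder containing the .mdb, since Jet needs to create its .ldb lock file there.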

    Read the article

  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server. Hypothetical scenario: my entire server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too? Even if I had DNS up somewhere other than the crashed server, it couldn't do any good, since the server it points to would still be down. Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so that if your webserver were down, you could redirect traffic to a backup? How would you switch to the secondary provider in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on and provide some guidance. Thanks

    Read the article

  • IP address from a MAC address

    - by acermate433s
    I'm writing a class to integrate a POS card reader device with our software. In order for it to work, I must know what IP it's using. We were given some sample code by the service provider, and the way they do this is to open a URL (http://www.ebizchargeemv.com/getip.php?mac={MAC address of device}), which returns the IP address of the device. The device I'm using is a POSLynx220 Mini. It has an Ethernet port that connects to the internet to communicate with the service provider. I send TCP data to it, and the device then controls a PIN pad that prompts a client to swipe his card. It's probably a mini computer that communicates with the service provider and uses the PIN pad as its input device. Just being curious, but how did they implement this? Are they implementing it using ARP? I'm planning on not using their website to determine the IP of the device. I've seen some code that uses ARP, but executing ARP on one of the PCs didn't detect the POS device.
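
    ARP only works on the local network segment, so a hosted getip.php almost certainly works the other way around: the device periodically reports its address to the provider's server, which the page then looks up by MAC - an assumption, but it would explain why local ARP shows nothing. If the device and the PC do share a LAN, here is a hedged C# sketch of the ARP-cache approach; FindIpByMac and the MAC format are illustrative, and the cache only contains hosts the OS has recently talked to:

        using System;
        using System.Diagnostics;
        using System.Text.RegularExpressions;

        class ArpLookup
        {
            // returns the first cached IP whose ARP entry matches the MAC, or null
            static string FindIpByMac(string mac) // e.g. "00-1a-2b-3c-4d-5e"
            {
                var psi = new ProcessStartInfo("arp", "-a")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    string line;
                    while ((line = p.StandardOutput.ReadLine()) != null)
                    {
                        // Windows arp output: "  192.168.1.50   00-1a-2b-3c-4d-5e   dynamic"
                        if (line.IndexOf(mac, StringComparison.OrdinalIgnoreCase) >= 0)
                            return Regex.Match(line, @"\d+\.\d+\.\d+\.\d+").Value;
                    }
                }
                return null;
            }

            static void Main()
            {
                Console.WriteLine(FindIpByMac("00-1a-2b-3c-4d-5e") ?? "not in ARP cache");
            }
        }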

    Read the article

  • Microsoft, Hotmail, Live, MSN, Outlook - unable to send emails, and no support received from Microsoft in the 3 months we have been asking

    - by bombastic
    OK, this is something unbelievable. We have a website where users sign up and receive links to confirm they signed up, BUT:

    1 - Microsoft blocked our IP (no one with a Microsoft email account can receive our emails)
    2 - We tried contacting Microsoft, submitting the detailed form about our problem
    3 - We posted 3 times in their community about our problem
    4 - We tweeted them about our problem
    5 - We tried finding a telephone support number (the few there are aren't helping at all)

    Do you think we solved it? The answer is NO. We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months. Our emails are fine - we checked all the email headers following Microsoft's guidelines, but it seems that's not enough. Checking our IP reputation, everything seems OK; indeed, we can send email easily to any other provider: Gmail, Yahoo, etc. Do you know any other way to try to get help?

    Full error from Microsoft:

        host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28)
        Unfortunately, messages from 94.23.***** weren't sent. Please contact your
        Internet service provider since part of their network is on our block list.
        You can also refer your provider to
        http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command)

    We are running a virtual private server (no hosting site), using NGINX too.

    Read the article

  • Unable to send mail to hotmail from rackspace cloud

    - by Jo Erlang
    I'm having an issue sending mail from Postfix on a Rackspace Cloud instance for my domain. Hotmail says:

        550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent.
        Please contact your Internet service provider since part of their network is on our block list.

    Here is the mail log:

        Sep 20 08:02:59 mydomain postfix/smtpd[1810]: disconnect from localhost[127.0.0.1]
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: to=<[email protected]>,
            relay=mx3.hotmail.com[65.55.92.184]:25, delay=0.19, delays=0.1/0.01/0.06/0.01,
            dsn=5.0.0, status=bounced (host mx3.hotmail.com[65.55.92.184] said: 550 SC-001
            (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please
            contact your Internet service provider since part of their network is on our
            block list. You can also refer your provider to
            http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command))
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: lost connection with
            mx3.hotmail.com[65.55.92.184] while sending RCPT TO

    I have implemented rDNS, SPF, and DKIM, and they all look fine. I have checked my IP and domain on most of the spam blacklists, and they are listed as OK (not listed as a spamming IP). What should I try next?

    Read the article

  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night, they carried out planned maintenance to change the two IPs I had been assigned to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the IP change. Since then, all of the domains pointing to the VPS have been timing out. Additionally, DirectAdmin was also not responding, but that was resolved by running the ipswap scripts found in the DirectAdmin knowledgebase. It did not fix my domains, though. I have contacted the VPS provider, but I have been waiting for a response for some time now. I have checked again and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser, it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly. But if I access a domain itself, it times out. Running a ping and a DNS lookup, I can confirm the nameserver IPs are correct. If I run a trace route, it reaches an IP that is similar to my VPS IPs (the last two blocks are different) before timing out; it never shows my server IP. I am relatively new to VPS management, so I don't have a vast wealth of experience with troubleshooting problems on them. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test, to rule out post-IP-change problems with my server configuration? It was all running fine prior to the IP change.
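
    A traceroute that dies one hop short of the VPS usually points at routing on the provider's side rather than at the Apache config, but DNS and routing problems can be separated with a few independent checks. A hedged sketch - example.com stands in for one of the affected domains and NEW.VPS.IP for the new address, both placeholders:

        # what the registrar/parent zone says the nameservers are
        dig +short NS example.com

        # what your own nameserver answers for the domain
        dig +short example.com @ns1.example.com

        # what a public resolver answers
        dig +short example.com @8.8.8.8

        # bypass DNS entirely: if this works, the problem is purely DNS or routing
        curl -H "Host: example.com" http://NEW.VPS.IP/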

    Read the article

  • switching dns server providers

    - by Yoav Aner
    I'm trying to wrap my head around something that I thought I kind of understood, but clearly there's some piece missing. We're currently using Zerigo as our primary DNS, with slave DNS running on Linode. This works quite well. However, recent DDoS attacks on Zerigo meant that while DNS queries were still resolved, we were unable to make any DNS changes. Since we rely on DNS changes in our own infrastructure, I'm looking to improve this somehow. I'd rather not ditch Zerigo completely, and I realise that this or a similar problem can happen with ANY primary DNS hosting provider. It might not be DDoS; it could be a bug on their server, or anything else that means we can no longer issue updates. For this I want to have a fallback option: a completely independent (primary) DNS provider (maybe AWS), which we will keep in sync manually and switch over to when there's a problem. This brings me to my question: how do I make sure we can switch providers quickly enough? Specifically, at our registrar there's a list of name servers, but no settings like TTL, etc. How do DNS clients know to use the newly updated name server records? Is this configured in the SOA? However, the SOA itself is hosted with the DNS provider, and we might not be able to update it... This is not a question about a one-time move, which can be planned, scheduled, and tested, but rather about being able to switch when things are half-broken.
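
    For the name server records themselves, resolvers follow the NS set published by the parent zone (the registry your registrar submits to), cached for whatever TTL the parent chooses - commonly up to 48 hours - which is why the registrar exposes no TTL knob. The practical consequence is that a fast switch usually means registering the standby provider's servers in the NS set ahead of time and keeping both zones in sync. A hedged BIND-style sketch of where the relevant timers live; all names and values are illustrative, not recommendations:

        example.com.  86400  IN  NS   ns1.primary.example.
        example.com.  86400  IN  NS   ns1.fallback.example.   ; pre-registered standby
        example.com.  86400  IN  SOA  ns1.primary.example. hostmaster.example.com. (
                              2012052001  ; serial - bump on every change
                              7200        ; refresh - how often slaves poll the master
                              900         ; retry - re-poll interval after a failure
                              1209600     ; expire - slaves keep answering this long
                              300 )       ; negative-caching TTL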

    Read the article

  • How can you connect to a SQL Server not on your domain?

    - by scotty2012
    I have a test machine that's not allowed on our domain because we are testing corporately unsupported applications (SQL 2008 and Server 2008). I want to use Management Studio to connect to the SQL 2008 server but can't get it working. I have authentication set to mixed mode, and I've checked 'Allow remote connections to this server', but when I try to access it, I get the error:

        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 53)

    Since it says the provider is Named Pipes, I enabled Named Pipes on the server, but still no dice. I've tried connecting to the system name, the IP, the system name\instance, and IP\instance, all to no avail. Is what I'm trying to do not possible?

    Edit: Well, through some basic troubleshooting, I've found that I can't ping the server from my client computer, but I can ping the client computer from the server. They are both plugged into the same switch and are sitting next to each other. The Windows Firewall on the server is turned on; are there some specific settings I need to enable?

    DAH! So it was the firewall blocking me. How can I enable the firewall and still connect?
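
    A hedged sketch for the built-in firewall on Server 2008, run in an elevated prompt; the rule names are arbitrary, and the port assumption is stated in the comments:

        rem allow the SQL engine's TCP port (1433 for a default instance; named
        rem instances use a dynamic port unless one is fixed in Configuration Manager)
        netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433

        rem allow the SQL Browser service so clients can resolve named instances
        netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434

        rem optional: let the machine answer ping (ICMP echo request)
        netsh advfirewall firewall add rule name="ICMP Echo" dir=in action=allow protocol=icmpv4:8,any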

    Read the article

  • Exchange 2003 - how to route ALL mail (including internal) via an external SMTP gateway? (Or, domain

    - by Scandalon
    Short version: Is there a way to have Exchange route all email, including mail between internal AD users that would normally be routed directly, through an external gateway? (SMTP, probably a "smart host" in Exchange nomenclature.)

    Longer version: I'm not an email expert/admin/or even competent. I inherited an Exchange 2003 server, and we're migrating to a web-based SaaS provider. To add to the fun, we're also (forced by deadlines) transitioning domains. What we (my boss) want is for any email sent to the new domain to have a copy sent to both domains. Getting mail sent to the new domain/provider to then be copied/forwarded to our old domain/Exchange is easy. But we want mail sent from the old domain to the old domain to get sent to the new domain as well. However: if we route all outgoing Exchange mail through the new provider's gateway, with the new domain forwarding to the old, we'd get an email loop. The "solution" desired is for an Exchange user who sends to another Exchange user to still be sent via the external gateway, which would in turn be sent to the new domain, and copied/forwarded back to the old domain. Is it possible? A bit of a strange request, I'm sure. And I expect that what we're attempting to do is DoingItWrong(tm). Any better ideas?

    Read the article

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research, I found out that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial-trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So, I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010, I get an unhandled ProviderIncompatibleException. The message is:

        An error occurred while getting provider information from the database. This can be
        caused by Entity Framework using an incorrect connection string. Check the inner
        exceptions for details and ensure that the connection string is correct.

    The inner exception is:

        The provider did not return a ProviderManifestToken string.

    Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have references to the following in the project:

        MySql.Data
        MySql.Data.Entity
        MySql.Web

    My connection string in Web.config:

        <connectionStrings>
          <add name="AESSmartEntities"
               connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
               providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?
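
    One thing that changes between a GAC install and a bin deployment is the ADO.NET provider registration: the 6.4.4 installer registered the factory machine-wide, and after uninstalling it, Entity Framework may no longer be able to locate the provider. A hedged sketch of the web.config registration to try - the Version attribute is an assumption and must match the MySql.Data assembly actually deployed in /bin:

        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
                       Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>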

    Read the article

< Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >