Search Results

Search found 13586 results on 544 pages for 'trusted domain'.


  • HTTPS load balancing based on some component of the URL

    - by user38118
    We have an existing application that we wish to split across multiple servers (for example: 1,000 users total, split 100 users per server across 10 servers). Ideally, we'd like to be able to relay the HTTPS requests to a particular server based on some component of the URL. For example: users 1 through 100 go to http://server1.domain.com/, users 101 through 200 go to http://server2.domain.com/, and so on, where the incoming requests look like this: https://secure.domain.com/user/{integer user # goes here}/path/to/file. Does anyone know of an easy way to do this? Pound looks promising, but it doesn't look like it supports routing based on the URL like this. Even better would be if it didn't need to be hard-coded: the load balancer could make a separate HTTP request to another server to ask "Hey, which server should I relay a request for URL {the URL that was requested goes here} to?" and relay to the hostname returned in the HTTP response.
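    A minimal sketch of what path-based routing could look like if SSL is terminated at the balancer; nginx isn't mentioned in the question, and the server names, user ranges and omitted certificate paths are assumptions:

        server {
            listen 443 ssl;
            server_name secure.domain.com;
            # ssl_certificate / ssl_certificate_key directives omitted in this sketch

            # regex locations are tried in order, so /user/100/... stays on server1
            location ~ ^/user/([1-9]|[1-9][0-9]|100)/ { proxy_pass http://server1.domain.com; }
            location ~ ^/user/(1[0-9][0-9]|200)/      { proxy_pass http://server2.domain.com; }
            # ...one location per backend, or regenerate this file from the user-to-server map
        }

    The dynamic variant described in the question (asking a lookup service per request which backend to use) would need something beyond static config, for example an embedded scripting module or a small purpose-built proxy.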

    Read the article

  • What is the most secure way to set up a mysql user for Wordpress?

    - by Sinthia V
    I am setting up subdomain-based MU on my domain. Everything is hosted by me, running on one CentOS/Webmin VPS. Will I be better off setting the MySQL user's host as localhost, 127.0.0.1, or with a wildcard %.mydomain.com? Which is more secure? Is localhost === 127.0.0.1? If not, what is the difference? Also, what is my domain, from MySQL's or WordPress' point of view, when I am connected by SSH terminal? How about when I connect by Webmin or Usermin? Does MySQL see me as Webmin or as my Unix user?
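    For reference, a minimal sketch of a least-privilege WordPress user bound to localhost (the database, user and password names here are placeholders, not from the question):

        CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
            ON wordpress_db.* TO 'wp_user'@'localhost';
        FLUSH PRIVILEGES;

    In MySQL, 'localhost' is special-cased to the local UNIX socket while '127.0.0.1' means TCP over loopback; either keeps the account unusable from other hosts, which is generally tighter than a %.mydomain.com wildcard.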

    Read the article

  • Bridging two networks

    - by Jukodan
    I'm hoping you may be able to offer some advice, as I'm not very familiar with setting up routers/access points. I have a network of computers on an Active Directory domain on the 192.x network. I then have another network on the 10.x network that needs to have access to the domain on the 192.x network. I am using Cisco/Linksys routers. What methodology would you suggest so that these two can communicate and I can add the computers from the 10.x network to the domain? Edit: Basically, I'm having trouble figuring out how to set up a static route.
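    As an illustration only (the addresses, masks and next hops below are invented, and the syntax assumes the routers accept IOS-style commands), a static route on each side could look like:

        ! On the router serving the 192.168.1.0/24 side, pointing at the 10.x router's near interface:
        ip route 10.0.0.0 255.0.0.0 192.168.1.254
        ! On the router serving the 10.0.0.0/8 side, pointing back the other way:
        ip route 192.168.1.0 255.255.255.0 10.0.0.254

    On consumer Linksys firmware the same thing is usually entered on an "Advanced Routing" / static-route page rather than a CLI.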

    Read the article

  • Can I change the user that is the default choice in UAC?

    - by Will
    Windows 7 install. I run as a normal user (RANU), so I have to hit the UAC prompt every once in a while. The problem is that it asks me to enter my password to elevate, but I need to enter the domain\username of the box admin (I'm on a domain) and that account's password. Instead of UAC popping up with my username entered and the caret in the password box, I'd like it to pop up with the domain\username of a different user, specifically the local admin account, already entered. This would save me a click and some typing. Sue me, I'm lazy. Is this possible?

    Read the article

  • "Access is denied" JavaScript error when trying to access the document object of a programmatically-

    - by Bungle
    I have a project in which I need to create an <iframe> element using JavaScript and append it to the DOM. After that, I need to insert some content into the <iframe>. It's a widget that will be embedded in third-party websites. I don't set the "src" attribute of the <iframe> since I don't want to load a page; rather, it is used to isolate/sandbox the content that I insert into it so that I don't run into CSS or JavaScript conflicts with the parent page. I'm using JSONP to load some HTML content from a server and insert it in this <iframe>. I have this working fine, with one serious exception: if the document.domain property is set in the parent page (which it may be in certain environments in which this widget is deployed), Internet Explorer (probably all versions, but I've confirmed in 6, 7, and 8) gives me an "Access is denied" error when I try to access the document object of this <iframe> I've created. It doesn't happen in any other browsers I've tested in (all major modern ones). This makes some sense, since I'm aware that Internet Explorer requires you to set the document.domain of all windows/frames that will communicate with each other to the same value. However, I'm not aware of any way to set this value on a document that I can't access. Is anyone aware of a way to do this - somehow set the document.domain property of this dynamically created <iframe>? Or am I not looking at it from the right angle - is there another way to achieve what I'm going for without running into this problem? I do need to use an <iframe> in any case, as the isolated/sandboxed window is crucial to the functionality of this widget. Here's my test code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
        <title>Document.domain Test</title>
        <script type="text/javascript">
            document.domain = 'onespot.com'; // set the page's document.domain
        </script>
        </head>
        <body>
        <p>This is a paragraph above the &lt;iframe&gt;.</p>
        <div id="placeholder"></div>
        <p>This is a paragraph below the &lt;iframe&gt;.</p>
        <script type="text/javascript">
            var iframe = document.createElement('iframe'), doc; // create <iframe> element
            document.getElementById('placeholder').appendChild(iframe); // append <iframe> element to the placeholder element
            setTimeout(function() { // set a timeout to give browsers a chance to recognize the <iframe>
                doc = iframe.contentWindow || iframe.contentDocument; // get a handle on the <iframe> document
                alert(doc);
                if (doc.document) { // HEREIN LIES THE PROBLEM
                    doc = doc.document;
                }
                doc.body.innerHTML = '<h1>Hello!</h1>'; // add an element
            }, 10);
        </script>
        </body>
        </html>

    I've hosted it at: http://troy.onespot.com/static/access_denied.html. As you'll see if you load this page in IE, at the point that I call alert(), I do have a handle on the window object of the <iframe>; I just can't get any deeper into its document object. Thanks very much for any help or suggestions! I'll be indebted to whomever can help me find a solution to this.
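    One workaround that is often suggested for exactly this IE situation (offered here as an untested sketch, not something from the original post) is to give the dynamically created frame a javascript: URL that sets document.domain from inside the frame before any script touches its document:

        var iframe = document.createElement('iframe');
        // The frame's own script sets document.domain to match the parent page,
        // so IE should then allow the parent to reach iframe.contentWindow.document.
        iframe.src = "javascript:void((function() {" +
                     "  document.open();" +
                     "  document.domain = 'onespot.com';" +  // must equal the parent's value
                     "  document.write('');" +
                     "  document.close();" +
                     "})())";
        document.getElementById('placeholder').appendChild(iframe);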

    Read the article

  • nhibernate sql Express connection issue - error: 26 - Error Locating Server/Instance Specified

    - by frosty
    I can connect fine with plain ADO.NET. However, I get the following error when I try to connect through NHibernate.

    hibernate.cfg.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
            <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
            <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
            <property name="connection.connection_string">Server=xxxxx\SQLEXPRESS; Database=xxxxx; User ID=xxxxx; Password=xxxxx; Trusted_Connection=True</property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
            <property name="show_sql">true</property>
          </session-factory>
        </hibernate-configuration>

    Server error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Full stack:

        [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)]
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4845255
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +4858557
        System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +90
        System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +342
        System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +221
        System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +189
        System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185
        System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +31
        System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +433
        System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66
        System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +499
        System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65
        System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117
        System.Data.SqlClient.SqlConnection.Open() +122
        NHibernate.Connection.DriverConnectionProvider.GetConnection() +102
        NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() +15
        NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) +65
        NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) +80
        NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) +599
        NHibernate.Cfg.Configuration.BuildSessionFactory() +87
        XXX.Domain.Repositories.NHibernateHelper.get_SessionFactory() in D:\dev\MyProject\XXX\XXX.Domain\Repositories\NHibernateHelper.cs:23
        XXX.Domain.Repositories.NHibernateHelper.OpenSession() in D:\dev\MyProject\XXX\XXX.Domain\Repositories\NHibernateHelper.cs:31
        XXX.Domain.Repositories.EntryRepository.GetCountByGmapId(Int32 gmapId) in D:\dev\MyProject\XXX\XXX.Domain\Repositories\EntryRepository.cs:152
        XXX.Controls.Activity.BindRepeater(Int32 id) in D:\dev\MyProject\XXX\XXX.Controls\Activity.ascx.cs:58
        XXX.Controls.Activity.DropDownListMaps_SelectedIndexChanged(Object sender, EventArgs e) in D:\dev\MyProject\XXX\XXX.Controls\Activity.ascx.cs:75
        System.Web.UI.WebControls.ListControl.OnSelectedIndexChanged(EventArgs e) +111
        System.Web.UI.WebControls.DropDownList.RaisePostDataChangedEvent() +134
        System.Web.UI.WebControls.DropDownList.System.Web.UI.IPostBackDataHandler.RaisePostDataChangedEvent() +10
        System.Web.UI.Page.RaiseChangedEvents() +165
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1485
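    One detail worth flagging (an observation added here, not something raised in the original post): the connection string mixes SQL credentials with Trusted_Connection=True, and the working ADO.NET test may not have done that. A hedged sketch of the same property using SQL authentication only, with placeholder values:

        <property name="connection.connection_string">
          Server=.\SQLEXPRESS;Database=MyDatabase;User ID=app_user;Password=app_password;
        </property>

    If integrated security is what's actually wanted, the User ID/Password pair would be dropped instead; error 26 itself usually points at the instance name or the SQL Browser service being unreachable rather than at credentials.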

    Read the article

  • Need help with regex parsing (in perl)

    - by Charlie
    Hi all, I need some help parsing an HTML file in Perl. I used the LWP module to retrieve a web page into $_ with $/ undefined, so there are no newline issues. Then I'm trying to find all strings matching a pattern. How do I do that? I know how to find one instance of it, but how do I match all instances? And what data structure would the results go into? A multi-dimensional array? My text (excerpt) looks like the following:

        <TR>
        <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1233"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 1</A></FONT></TD>
        <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 27 2010 3:00PM</FONT></TD>
        <TD BGCOLOR=EEEEEE>&nbsp;</TD>
        </TR>
        <TR><TD BGCOLOR=EEEEEE COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR>
        <TR><TD COLSPAN=3 ><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR>
        <TD><A HREF="/program.cgi?pid=1234"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 2</A></FONT></TD>
        <TD nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 29 2010 7:00PM</FONT></TD>
        <TD>&nbsp;</TD>
        </TR>
        <TR><TD COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=EEEEEE><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR>
        <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1235"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 3</A></FONT></TD>
        <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jul 3 2010 7:00PM</FONT></TD>
        <TD BGCOLOR=EEEEEE>&nbsp;</TD>
        </TR>

    I want to get the following into an array (or any structure):

        { ["/program.cgi?pdi=1233", "Title 1"], ["/program.cgi?pdi=1234", "Title 2"], ["/program.cgi?pdi=1235", "Title 3"] }

    Thanks
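    A minimal sketch of one way to do this in Perl, assuming the markup stays this regular (a proper parser such as HTML::TreeBuilder is usually the safer route); the variable names are made up:

        my @programs;                       # array of [ url, title ] pairs
        while ( $_ =~ m{<A\s+HREF="(/program\.cgi\?pid=\d+)"[^>]*>.*?SIZE=2>([^<]+)</A>}gis ) {
            push @programs, [ $1, $2 ];     # $1 = link, $2 = link text
        }
        # $programs[0][0] is "/program.cgi?pid=1233", $programs[0][1] is "Title 1", and so on

    The /g flag in scalar context makes the while loop walk through every match in turn, and an array of array references is the usual structure for a list of (url, title) pairs.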

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases, and when it comes to associations it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts; this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping?
    Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case relational databases.

    What is Relationship Mapping?
    A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships
    There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types:
    - One-to-one relationships: a relationship where the maximum of each of its multiplicities is one.
    - One-to-many relationships: also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one.
    - Many-to-many relationships: a relationship where the maximum of both multiplicities is greater than one.
    The second category is based on directionality and it contains two types:
    - Uni-directional relationships: an object knows about the object(s) it is related to, but the other object(s) do not know of the original object. To put this in EF terminology, a navigation property exists only on one of the association ends and not on both.
    - Bi-directional relationships: the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How Are Object Relationships Implemented in POCO Domain Models?
    When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of the other object.

    How Are Relational Database Relationships Implemented?
    Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship, the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship, we implement a foreign key from the "one table" to the "many table". We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

    Introducing the Model
    Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user can have one billing address. The Address information of a User is modeled as a separate class, as you can see in the UML model in the original post. In object-modeling terms, this association is a kind of aggregation, a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-grained Domain Models
    The motivation behind this design was to achieve fine-grained domain models. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse, and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types
    Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition; a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table.

    How to Implement a Complex Type with Code First
    Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

        public class User
        {
            public int UserId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Username { get; set; }
            public Address Address { get; set; }
        }

        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class EntityMappingContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

    With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema
    The mapping result for this object model is shown in the schema diagram in the original post.

    Limitations of This Mapping
    There are two important limitations to classes mapped as Complex Types:
    - Shared references are not possible: the Address complex type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
    - No elegant way to represent a null reference: there is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary
    In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References
    - ADO.NET team blog
    - Mapping Objects to Relational Databases
    - Java Persistence with Hibernate
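    As a quick illustration of the mapping in use (a hedged sketch; the property values below are invented and this snippet is not part of the original post):

        using (var context = new EntityMappingContext())
        {
            var user = new User
            {
                FirstName = "John",
                LastName  = "Doe",
                Username  = "jdoe",
                Address   = new Address { Street = "1 Main St", City = "Springfield", PostalCode = "12345" }
            };
            context.Users.Add(user);
            context.SaveChanges();  // Street, City and PostalCode are stored as columns of the Users table
        }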

    Read the article

  • Uploading documents to WSS (Windows Sharepoint Services) using SSIS

    - by Randy Aldrich Paulo
    Recently I was tasked to create an SSIS application that will query a database, split the results by certain criteria, create a CSV file for every result, and upload the files to a SharePoint document library site. I've searched the web and compiled the steps I've taken to build the solution. Summary:
    A) Create a proxy class of WSS Copy.asmx.
    B) Create a wrapper class for the proxy class and add a mechanism to check whether the file exists, plus a delete method.
    C) Create an SSIS package and call the wrapper class to transfer the files.

    A) Creating the proxy class
    1) Go to the Visual Studio Command Prompt and type wsdl http://[sharepoint site]/_vti_bin/Copy.asmx; this generates the proxy class (Copy.cs) that will be added to the solution.
    2) Add Copy.cs to the solution and create another constructor for Copy() that accepts the additional parameters url, userName, password and domain:

        public Copy(string url, string userName, string password, string domain)
        {
            this.Url = url;
            this.Credentials = new System.Net.NetworkCredential(userName, password, domain);
        }

    3) Add a namespace.

    B) Wrapper class
    Create a new C# library that references the proxy class.

    C) Create the SSIS package
    The SSIS solution is composed of:
    1) An Execute SQL Task, which returns single-column rows containing the criteria.
    2) A Foreach Loop Container, which loops over each result from the query (SQL Task) and creates a CSV file in a certain folder.
    3) A Script Task, which calls the wrapper class to upload the CSV files located in that folder to the target WSS document library.

    Note: I've created another overload of CopyFiles that accepts a DirectoryInfo instead of a file location and loops through the contents of the folder. (Designer View and Variable View screenshots appear in the original post.)
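    For context, a hedged sketch of what the wrapper's upload call might look like around the generated proxy's CopyIntoItems method (CopyIntoItems and FieldInformation come from the standard Copy.asmx contract, but the wrapper shape, variable names and URLs here are assumptions):

        public void UploadFile(string localPath, string destinationUrl,
                               string serviceUrl, string user, string pass, string domain)
        {
            var copyService = new Copy(serviceUrl, user, pass, domain);   // the generated proxy
            byte[] fileBytes = System.IO.File.ReadAllBytes(localPath);
            string[] destinations = { destinationUrl };
            var titleField = new FieldInformation
            {
                DisplayName = "Title",
                Type = FieldType.Text,
                Value = System.IO.Path.GetFileName(localPath)
            };
            CopyResult[] results;
            copyService.CopyIntoItems(localPath, destinations,
                                      new[] { titleField }, fileBytes, out results);
        }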

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following:
    - Different user on the same device: WORKED
    - Problem user on a different device: FAILED
    That seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command:

        Get-ActiveSyncDeviceStatistics -Identity USERID

    Turns out, it was the old familiar inheritable-permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling inheritable permissions: open ADUC and enable Advanced Features, then open Properties and go to the Security tab for the user in question. Click on Advanced, and verify that "Include inheritable permissions from this object's parent" is *checked*. (The screenshots in the original post illustrate each step.)

    You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People who are in the following groups will have this flag altered by AD: Account Operators, Administrators, Backup Operators, Domain Admins, Domain Controllers, Enterprise Admins, Print Operators, Read-Only Domain Controllers, Replicator, Schema Admins, Server Operators. Once the box is checked, permissions will flow and the user will be set correctly. Even if the box later becomes unchecked again, they will function normally, as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)

    -Chris
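    If you need to hunt for other accounts in the same state, a rough sketch along these lines can report whether inheritance is blocked on a user object (assumes the ActiveDirectory PowerShell module; USERID is a placeholder):

        Import-Module ActiveDirectory
        $dn  = (Get-ADUser USERID).DistinguishedName
        $acl = Get-Acl ("AD:\" + $dn)
        $acl.AreAccessRulesProtected   # True means "Include inheritable permissions" is unchecked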

    Read the article

  • Firefox for NTLM secured sites

    - by Sarang
    I spent the last weekend fighting to get Firefox to connect to a SharePoint portal hosting my homegrown TFS instance's TFS/WEB and team project portal from a friend's place. Firefox is THE favourite browser, and I was hating to see it fail miserably with NTLM authentication. The fun part is that it showed the login prompt, accepted credentials, and, like a pestering young puppy, came back for the same credentials. After banging my head and various "I don't know what I don't know" attempts, I decided to play god and entered Firefox's advanced config mode. And voila! There sits a nifty little option called network.automatic-ntlm-auth.trusted-uris. Assign your URI to it and you are through. No more shameface before those Chrome/Opera users. :) To enter Firefox's god mode, open a new tab and type about:config in the address bar. There is a search box, which certainly comes in handy for sifting through hundreds of options. The root cause lies in Firefox's default behaviour of not allowing NTLM passthrough authentication: by default Firefox won't answer the NTLM challenge automatically, the web app expecting NTLM rejects whatever else it offers, authentication fails, and Firefox keeps asking for credentials over and over again. The steps listed above are roughly the equivalent of adding a website to your trusted sites list in Internet Explorer.
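    For what it's worth, the same preference can also be preset in a user.js file so it survives profile resets (the URIs below are placeholders; a comma-separated list is accepted):

        user_pref("network.automatic-ntlm-auth.trusted-uris", "https://tfs.example.com,https://sharepoint.example.com");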

    Read the article

  • 30 seconds from File|New to a new CRUD Silverlight application with Teleriks new LINQ Implementation

    Last month Telerik released its new LINQ implementation, and last week we released the new Data Services Wizard for Telerik OpenAccess, which supports both traditional OpenAccess entities and the new LINQ implementation. I will walk you through the process where you connect to a database, add a new domain model, wrap it in a new WCF Data Services (Astoria) service, and add a CRUD-enabled Silverlight application. All in 30 seconds!

    Step 1: Build your Domain Model (20 seconds)
    Open Visual Studio 2010 RTM (or 2008) and add a new ASP.NET project. Right-click on the project, select Add|New Item, and choose Telerik OpenAccess Domain Model from the item template list. The Visual Entity Designer wizard comes up. Select the database server you are using in the first screen (SQL Server, Oracle, SQL Azure, MySQL, etc.) and then also build your database connection string. Next select the tables, views, and stored procedures you want ...

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:
    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule-level logic, implemented in a programming language such as Java.
    - A set of service-layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and multiple UI widgets, all structured to support the same data structures.
    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data-tracking spreadsheet that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all of its dependents:
    - database table and column data structures
    - domain object data structures
    - service-layer interface methods and parameter data structures
    - screen and UI component data structures

    Read the article

  • Serve web application error messages from Http server [closed]

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the resource https://domain.com/resource, which doesn't exist. If I [GET] that URL, I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error), nginx sends its own message to the user: some HTML file accessible by nginx. My nginx configuration is very simple, just:

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
        }

    And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL: a [GET] on https://domain.com/customer/<non-existent-customer-name>/ will always return 404 (or another error), while a [GET] on https://domain.com/customer/<existent-customer>/ will return something other than 404 (the customer exists). Is there any way of serving the Tomcat (application server) error messages with nginx (HTTP server), i.e. checking the status returned through the proxy_pass directive and acting upon it?
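    A minimal sketch of the usual nginx answer to this, proxy_intercept_errors (the error page path is an assumption):

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
            proxy_intercept_errors on;                    # let nginx handle upstream error codes
            error_page 404 500 502 503 504 /error.html;   # serve a local page instead
        }

        location = /error.html {
            root /usr/share/nginx/html;   # wherever the static error page lives
            internal;
        }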

    Read the article

  • What kind of user stories should be written in the initial stages of a project?

    - by Domenic
    When just starting a project, you have nothing: no UI, no data layer, nothing in between. Thus, a single story like "users should be able to view their foos" will entail a lot of work. Once you have that story, one like "users should be able to edit their foos" is more realistic, but that first story will involve setting up a UI layer, a presentation logic layer, a domain logic layer, and a data access layer. This doesn't fit with my concept of "tasks": to me, I'd rather have something like the following "tasks":
    - Show dummy data for a user's foos in HTML, derived from JavaScript objects.
    - Set up a presentation logic layer, and connect the JavaScript objects to it.
    - Set up a domain logic layer, and connect the presentation logic layer to it.
    - Set up a data access layer, and connect the domain logic layer to it.
    Do all of these fall under the single "story" above? If so, I feel like stories are not a terribly useful framework in the early stages of a project. If so, that's fine; I just want to make sure I'm not missing something, since I'm really trying to learn this agile methodology as best I can.

    Read the article

  • Track a Adobe Flash app hosted on multiple domains with Google Analytics

    - by roberkules
    I'm working on a Flash app that's going to be distributed to more and more partners (and obviously domains). It needs to be tracked both in aggregate and per partner. I implemented Google Analytics using gaforflash, tracking virtual pageviews and events inside the Flash app. What I want to achieve:
    - View an aggregated report of all partners.
    - Identify the partner not by the domain (where the Flash app is used) but by a partnerID.
    - Give each partner access to the report for his domain (no admin rights needed).
    I came up with this solution:
    - Use only one "web property" in Google Analytics: UA-XXXXXX-4, .example.com.
    - Set a custom/virtual hostname per partner (GA's "utmhn" parameter): partner1.example.com, partner2.example.com.
    - Create a profile for each partner, setting the filter to include only the relevant "subdomain".
    Problems that came up:
    - The gaforflash library doesn't support overriding the host name. Possible workaround: the gaforflash source code is available, so I could add the functionality.
    - Any goal from the "master" profile is not copied to the partner profiles (profile 1: include traffic from hostname ^partner1\. ; profile 2: include traffic from hostname ^partner2\.).
    Is it (very) bad to fake the hostname? Are there better approaches? Or what improvements could you think of? UPDATE: I'm looking primarily for a solid data structure inside Google Analytics, regardless of the Flash implementation. The only limitations: we need an aggregated view across all partners, our partners need to have access to their subset of data, and we want to identify the partner by a custom partnerID, not the domain.
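    One alternative data layout, offered only as a hedged sketch (the tracker construction follows the gaforflash examples, but the partner-prefix idea and account ID are assumptions, not from the question): encode the partnerID in the virtual page path instead of faking the hostname, and filter each partner profile on that path prefix.

        import com.google.analytics.GATracker;

        // "AS3" mode, visual debug off; the account ID is a placeholder
        var tracker:GATracker = new GATracker(this, "UA-XXXXXX-4", "AS3", false);
        tracker.trackPageview("/partner1/app/start");   // every virtual pageview carries the partnerID prefix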

    Read the article

  • Updating Banshee to 2.4

    - by Lucasguy11
    I have Banshee 2.2.1 with Ubuntu 11.10. I have been trying to update Banshee to 2.4 (released yesterday), but it just isn't working. I have been using sudo add-apt-repository ppa:banshee-team/ppa in the terminal, as described on the banshee.fm website, but after running it, the terminal says this:

        sudo add-apt-repository ppa:banshee-team/ppa
        You are about to add the following PPA to your system:
        PPA for Banshee Team
        This PPA contains the latest stable debs of Banshee for Ubuntu. To install Banshee, you must first enable the PPA on your system:
        1. Open Software Sources (System->Administration->Software Sources)
        2. Navigate to the "Third Party Sources" tab.
        3. Click "Add"
        4. Enter the APT line below that corresponds to your Ubuntu version that starts with "deb".
        5. Click "Add Source"
        6. Click "Close"
        7. It will prompt you to reload your software cache. Click "Reload".
        8. Now install the package "banshee" from Synaptic, or using the command below:
        sudo apt-get install banshee
        For those who wish to compile from trunk, add the deb-src line and then run "sudo apt-get build-dep" to install all required dependencies before starting to compile. Unstable (versions which have odd minor version numbers) debs of Banshee can be found here: https://launchpad.net/~banshee-team/+archive/banshee-unstable
        More info: https://launchpad.net/~banshee-team/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.OPAjxemDQr --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 9D2C2E0A3C88DD807EC787D74874D3686E80C6B7
        gpg: requesting key 6E80C6B7 from hkp server keyserver.ubuntu.com
        gpg: key 6E80C6B7: "Launchpad PPA for Banshee Team" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1

    I believe I have the PPA, but I'm not sure. I need a step-by-step process to get this; I've been trying to figure it out for quite a while now...
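    For what it's worth (a follow-up sketch, not part of the question): adding the PPA only registers the source, so the new version still has to be pulled in afterwards:

        sudo apt-get update            # refresh the package lists so the PPA's 2.4 packages are seen
        sudo apt-get install banshee   # installs or upgrades Banshee from the PPA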

    Read the article

  • no-www redirect not working / DNS A record

    - by HonzaB
    I'm far from an expert on this, but I'll try to be as clear as possible. I have an e-shop solution leased from a company and hosted on their server. I can access it through company.com/myshop, and it also allows me to set up to 3 domains that it should recognize and redirect to my specific shop. I registered a domain with a different company and am trying to "redirect" it to the e-shop by means of the following DNS entries (as they look in the admin GUI):

        *             A  111.111.111.111
        *.myshop.com  A  111.111.111.111
        myshop.com    A  111.111.111.111

    I've managed to make www.myshop.com redirect to the IP of company.com (111.111.111.111), which then goes on to do exactly what I expect it to do (i.e. recognizes that the request comes from my domain and does some further redirects itself). However, I can't seem to make myshop.com (i.e. without the www) redirect to that IP too. The company that I registered the domain with provides a "URL redirect" service, but Google would only register the redirect request and wouldn't follow it. That's why I hope for a DNS solution to this; my assumption is that I've managed to miss adding a record to the DNS. If, however, the reason lies elsewhere, I'd be happy to hear about that too. If it's a search-engine-friendly solution (i.e. the www/no-www dilemma and avoidance of duplicate-content penalties), then that's even better; I have no preference either way (www/no-www), I just need it to work. Any help is greatly appreciated, thanks.
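    A quick way to see whether the bare name actually resolves as intended (a diagnostic sketch; the IP shown is the placeholder from the question):

        dig +short myshop.com A
        # expected: 111.111.111.111
        dig +short www.myshop.com A
        # expected: 111.111.111.111 (possibly via a CNAME)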

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:
    - it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database;
    - it provides native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.
    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation
    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database
    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");
        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };
        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data
    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function.

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function.

        var annotations = new nosql.Annotations();
        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            this.salary *= percent;
          };
        };
        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data
    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function.

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data
    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert);
        };

    Deleting data
    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant.

        var onSession = function(err, session) {
          var key = new Employee(999);
          session.remove(key, onDelete);
        };

    More extensive queries
    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate
    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • Problems after bumblebee installation

    - by Samuel
    I tried to install Bumblebee on Ubuntu 12.04 LTS by following the steps on the Ubuntu wiki site. But when I used this command:

        sudo add-apt-repository ppa:bumblebee/stable && sudo apt-get update

    this output came out:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.q0zzLiXVT3 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 46C0364A882F14F899448FFCB22A95F88110A93A
        gpg: requesting key 8110A93A from hkp server keyserver.ubuntu.com
        gpg: key 8110A93A: "Launchpad PPA for Bumlebee Project" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1
        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.

    There's also the same problem message when I try to run the update center: "E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list, E: The list of sources could not be read., E: The package lists or status file could not be parsed or opened." I don't know what to do since I'm a newbie at Linux. Thanks in advance, Samuel.
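    The error points at a malformed third line in that source list rather than at the key import. A hedged sketch of how one might inspect and repair it (the corrected deb line shown is the expected format for this PPA on precise, but verify it against the PPA page):

        cat /etc/apt/sources.list.d/bumblebee-stable-precise.list
        # line 3 has been mangled; it should look something like:
        #   deb http://ppa.launchpad.net/bumblebee/stable/ubuntu precise main
        sudo nano /etc/apt/sources.list.d/bumblebee-stable-precise.list   # fix or delete the broken line
        sudo apt-get update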

    Read the article

  • Can I remove all-caps and shorten the disclaimer on my License?

    - by stefano palazzo
    I am using the MIT License for a particular piece of code. Now, this license has a big disclaimer in all caps: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF... I've seen a normally capitalised disclaimer on the zlib license (notice that it is above the license text), and even software with no disclaimer at all (which implies, I take it, that there is indeed a guarantee?), but I'd like some sourced advice from a trusted party; I just haven't found any. GNU's license notice for other files comes with this disclaimer: "This file is offered as-is, without any warranty." Short and simple. My question, therefore: are there any trusted sources indicating that a short rather than long, and a normally spelled rather than capitalised, disclaimer (or even one or the other) is safely usable in all of the jurisdictions I should be concerned with? If the answer turns out to be yes: why not simply use the short license notice that the FSF proposes for readme files and short help documents instead of the MIT License? Is there any evidence suggesting this short 'license' will not hold up? For the purposes of this question, the software is released in the European Union, should it make any difference.

    Read the article

  • Recommendations for managing DNS issues when hosting customer sites.

    - by Thomas
    I'm working at a company which primarily provides SaaS products but will also host some of our customers' corporate websites. My question relates to recommendations for managing DNS for clients' domain names. My objectives:
    - Not restrict my ability to change the server's IP address, such as might happen when I move my servers to a new host.
    - Not have to contact the customer to change their domain's DNS if I need to change the server's IP address. Oftentimes, customers lose this information or have to track down the one person with any knowledge of the domain settings.
    - Map both clientdomain.com and www.clientdomain.com to the proper IIS site.
    However, I'm running into a couple of common problems:
    - Sometimes the DNS console provided by the client's hosting company does not allow CNAME records.
    - Sometimes the DNS console provided by the client's hosting company will not let me create a CNAME entry for spiffydomain.com, because the hosting company has created an SOA record for that entry or simply requires that spiffydomain.com be an A record.
    I believe one solution to #2 is to use a wildcard for a CNAME entry (i.e. *.spiffydomain.com). Is that correct? How do other folks that host many customers' sites manage changes of DNS entries on their servers?
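    For illustration, the record shape the first two objectives usually push you toward (all names and the IP below are placeholders): the client's names point at a hostname you control, so only your own A record ever has to change.

        www.clientdomain.com.       IN  CNAME  clients.yourhostingco.com.
        clientdomain.com.           IN  A      203.0.113.10   ; the apex cannot be a CNAME in standard DNS
        clients.yourhostingco.com.  IN  A      203.0.113.10   ; the one record you actually maintain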

    Read the article

  • SSL Certificate Works in Monit - But Not in Keystore

    - by Bart Silverstrim
    I have a situation where there's a keystore file with the various root/intermediate certificates stored in it in a way that seems to work for most browsers. The problem is that when mobile browsers hit it, there's a break in the chain and they complain. I used the SSL checker at http://www.sslshopper.com/ssl-checker.html and it states that "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate." So the desktop browsers must already have the intermediate certs and can build the chain, I'm assuming, while the mobile browsers can't. The thing is that I had used Portecle to export certificates from the keystore and cobble them together into a .PEM certificate to run the Monit utility. When I check that application with the SSL checker, it works fine! The person who originally created the keystore said he couldn't follow the SSL provider's directions for creating the keystore because he had created the CSR using openssl, so the cert and private key had to be converted to DER format and loaded with ImportKey; the ImportKey directions he found online only ever wrote to one fixed keystore file and erased anything already in that file if it existed. So is there a way to take the certificate I created for Monit and create a working keystore for the Tomcat website? What would be causing the chain to be broken in the current keystore but work for Monit? I have the SSL cert provider's intermediate and cross certificates, and the website's certificate, but what else would I need to create a working chain of certs for a keystore?
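    One commonly used route (a hedged sketch; the file names and alias are assumptions) is to bundle the key, the server certificate and the full intermediate chain into a PKCS#12 file with openssl, then convert that into a Java keystore, so the chain travels with the key entry:

        # key + server cert + intermediates into one PKCS#12 bundle
        openssl pkcs12 -export -in site.crt -inkey site.key \
            -certfile intermediates.pem -name tomcat -out site.p12

        # convert the bundle into a JKS keystore Tomcat can use
        keytool -importkeystore -srckeystore site.p12 -srcstoretype PKCS12 \
            -destkeystore keystore.jks -srcalias tomcat -destalias tomcat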

    Read the article

  • Looking for Hosting Companies that Meet the Following Criteria [closed]

    - by Bryan Hadaway
    Possible Duplicate: How to find web hosting that meets my requirements?

    Please note: this is not a subjective question and I am not looking for opinions. This is very much an objective question with a legitimate use and purpose: to identify hosts that offer the following:
    - Multi-domain SSL certificate
    - Linux server
    - PHP 5+
    - cPanel
    - Unlimited storage, bandwidth, MySQL DBs and addon domains
    SSL is mentioned first because it is the most important. This is not a single-domain or wildcard SSL cert; it's relatively new and unique. It's for the purpose of securing multiple domains on one account without having to have an entirely separate hosting account and SSL cert for every domain. I'm currently using BlueHost/HostMonster, which meets all my criteria except for this special kind of SSL cert. Currently, HostGator is the only host I've been able to find that offers everything I've listed. Again, I'm not requesting recommendations, advice or opinions of the best or most reputable service based on your experiences. I am asking for an objective list of known hosts that offer the aforementioned items only. Thereafter, I (and others whom this will benefit) can make our comparisons and selection privately.

    Read the article

  • Change the Integrated Weblogic Port number

    - by pavan.pvj
    A situation came up where I wanted to work with two JDevelopers simultaneously and start two different applications in the two JDEVs. (Both of them have to be in separate installation locations, otherwise the system directory will cause a problem.) Now, when we want to start WLS in JDEV, only the first one will start; the other one fails with a port-conflict exception. Until a few days back, the million-dollar question was how to change the integrated WLS port number. So, here's the answer after some R&D. In the View menu, click on "Application Server Navigator" and right-click on Integrated Weblogic Server.
    1) If it is the first time that you are trying to start the server, there is a menu item "Create Default Domain". If you click on this, a window is displayed where it asks for the preferred port number. Change it here.
    2) If the domain is already created, click on Properties and change the preferred port number.
    Again, if you want to change the port before starting JDEV, go to $JDEV_USER_HOME/systemxxx/o.j2ee in the file system, open the file adrs-instances.xml, and change the http-port in the startup-preferences:

        <hash n="startup-preferences">
          <value n="http-port" v="7111"/>
        </hash>

    Note 1: adrs-instances.xml will be created ONLY after you create the default domain.
    Note 2: systemxxx refers to system.<JDEV version>, like system.11.1.1.3.56.59 for PS2.
    Note 3: $JDEV_USER_HOME on Windows is C:\Documents and Settings\[user_name]\Application Data\JDeveloper.
    Now you can run multiple integrated WLS instances simultaneously, but please be aware that running more than one WLS server will degrade system performance.

    Read the article
