Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • What an SEO Expert Can Do For Their Clients

    In the last few years, the number of SEO (Search Engine Optimization) companies has been on the rise. These companies do not have to be co-located with their clients: an SEO expert in Wales can service a client in Singapore or Malaysia.

    Read the article

  • SEO is Dead

    Search Engine Optimization (SEO) is a method businesses use to attempt to rise to the top of the search engine results pages (SERPs). This has been a practice since search engines were first built for the Internet.

    Read the article

  • FTP through HAProxy

    - by Menda
    I have a machine which is the Host and has HAProxy installed and working. Then I have a Guest KVM virtual machine running inside the Host with the IP 192.168.122.152. I installed an FTP server in the Guest machine with VSFTPD. From the Host, if I try the command $ ftp -p 192.168.122.152, it works perfectly and I can connect to the Guest FTP. I should note that this FTP is configured as passive, but both passive and active connections work from the Host. This is an extract of part of /etc/vsftpd.conf in the Guest:

        # Passive mode
        connect_from_port_20=NO
        tcp_wrappers=YES
        listen_address=192.168.122.152
        pasv_enable=YES
        pasv_promiscuous=NO
        port_enable=YES
        port_promiscuous=NO
        pasv_max_port=10000
        pasv_min_port=10250

    Now it's time to make it accessible from outside, so I configured /etc/haproxy/haproxy.cfg like this:

        listen FTP_Default *:21
            server ftp01 192.168.122.152 check port 21 inter 10s rise 1 fall 2

        listen FTP_Range *:10000-10250
            server ftp01 192.168.122.152 check port 21 inter 10s rise 1 fall 2

    But if I try to connect from another machine on the internet with $ ftp -p $PUBLICIP, it only responds Connected to <PUBLICIP>, but it never asks for the login and password. Something in the HAProxy config must be wrong, because it's the only point where it fails. By the way, I tried to adapt my configuration to the one in this blog. Thanks.

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

    SQL CE – What is it and why should you care?

    SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application, starts up when you first access a SQL CE database, and automatically shuts down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

    Easy Migration to SQL Server

    SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like stored procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to simply change the database connection string in your web.config file and have your application just work.
    New Tooling Support for SQL CE in VS 2010 SP1

    VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

    - Create new SQL CE databases
    - Edit and modify SQL CE database schema and indexes
    - Populate SQL CE databases with data
    - Use the Entity Framework (EF) designer to create model layers against SQL CE databases
    - Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
    - Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

    You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

    Download

    You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll then also need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

    Walkthrough of Two Scenarios

    In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

    - How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
    - How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code First “auto-create” a SQL CE database for us based on our model classes. We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We’ll do all this within the context of an ASP.NET MVC based application.

    You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (a separate download that enables SQL CE tooling support for VS 2010 SP1).

    Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

    This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

    Step 1: Create a new ASP.NET Web Forms Project

    We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented.

    Step 2: Create a SQL CE Database

    Right-click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog box. Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”. Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we click the “Add” button a Store.sdf file is added to our project.

    Step 3: Adding a “Products” Table

    Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.
    Since it is a new database there are no tables within it. Right-click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row). When we click “OK” our new Products table will be created in the SQL CE database.

    Step 4: Populate with Data

    Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data. Let’s add a few sample rows of data to it.

    Step 5: Create an EF Model Layer

    We have a SQL CE database with some data in it – let’s now create an EF model layer that will provide a way for us to easily query and update data within it. Right-click on the project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our Solution Explorer and launch a wizard that allows us to quickly create an EF model. Select the “Generate from Database” option and click Next. Choose to use the Store.sdf SQL CE database we just created and then click Next again. The wizard will then ask you what database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier. When we click the “Finish” button Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database. The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity will be persisted as a class (called “Product”) that we can programmatically work with within our ASP.NET application.

    Step 6: Compile the Project

    Before using your model layer you’ll need to build your project. Press Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

    Step 7: Create a Page that Uses our EF Model Layer

    Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF model layer we just created). Right-click on the project and choose the Add->New Item command. Select the “Web Form using Master Page” item template, and name the page you create “Products.aspx”. Base the page on the “Site.Master” master page that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:GridView> control within it. Then click the “Design” tab to switch into design view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI. Choose the “New data source…” drop-down option. This will bring up a dialog which allows you to pick your data source type. Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer. Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
    Then click Next – which will allow us to pick which entity within it we want to bind to. Select the “Products” entity in the dialog – which indicates that we want to bind against the “Product” entity class we defined earlier. Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products. When you click “Finish” VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control. The last two steps are to click the “Enable Editing” checkbox on the grid (which will cause the grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

    Step 8: Run the Application

    Let’s now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we’ll see a grid UI of the products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values. When we click “Update” the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

    Learn More about using EF with ASP.NET Web Forms

    Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

    Walkthrough 2: Using EF Code First with SQL CE and ASP.NET MVC 3

    We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code-first development”. Code-first development enables a pretty sweet development workflow. It enables you to:

    - Define your model objects by simply writing “plain old classes” with no base classes or visual designer required
    - Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
    - Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
    - Optionally auto-create a database based on the model classes you define – allowing you to start from code first

    I’ve done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

    Step 1: Create a new ASP.NET MVC 3 Project

    We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We’ll use the “Internet Application” project template so that it has a default UI skin implemented.

    Step 2: Use NuGet to Install EFCodeFirst

    Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We’ll use the Package Manager Console to do this. Bring up the console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
    Then type:

        install-package EFCodeFirst

    within the Package Manager Console to download the EFCodeFirst library and have it added to our project.

    Step 3: Build Some Model Classes

    Using a “code first” based development workflow, we create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right-click on the “Models” folder of our project and add three classes to it: “Dinner”, “RSVP” and “NerdDinners” (see the sketch after Step 6 below). The “Dinner” and “RSVP” model classes are “plain old CLR objects” (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data persistence attributes or data code has been added to them. The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

    Step 4: Listing Dinners

    We’ve written all of the code necessary to implement our model layer for this simple project. Let’s now expose and implement the URL /Dinners/Upcoming within our project. We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command. We’ll name the controller we want to create “DinnersController”. We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above, using a LINQ query to retrieve the data and pass it to a view to render. We’ll then right-click within our Upcoming method and choose the “Add View” menu command to create an “Upcoming” view template that displays our dinners. We’ll use the “empty” template option within the “Add View” dialog and write the view template using Razor.

    Step 5: Configure our Project to use a SQL CE Database

    We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class to a SQL CE database by adding a <connectionString> entry to the web.config file at the top of our project (see the sketch after Step 6 below). EF Code First uses a default convention where context classes will look for a connection string that matches the DbContext class name. Because we created a “NerdDinners” class earlier, we’ve also named our connection string “NerdDinners”. We configure our connection string to use SQL CE as the database, and tell it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

    Step 6: Running our Application

    Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners. You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database it points at doesn’t exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn’t exist and automatically created it for us.
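    The original post showed the three model classes (Step 3) and the web.config connection string (Step 5) as screenshots, which aren’t reproduced here. As a rough sketch of what they might look like – the property names below are assumptions based on the well-known NerdDinner sample, not taken from the article:

        public class Dinner
        {
            public int DinnerID { get; set; }
            public string Title { get; set; }
            public DateTime EventDate { get; set; }
            public string Address { get; set; }
            public string HostedBy { get; set; }
            public virtual ICollection<RSVP> RSVPs { get; set; }
        }

        public class RSVP
        {
            public int RsvpID { get; set; }
            public int DinnerID { get; set; }
            public string AttendeeEmail { get; set; }
            public virtual Dinner Dinner { get; set; }
        }

        // Derives from DbContext (supplied by EFCodeFirst) and exposes the
        // sets that EF Code First maps to the Dinners and RSVPs tables.
        public class NerdDinners : DbContext
        {
            public DbSet<Dinner> Dinners { get; set; }
            public DbSet<RSVP> RSVPs { get; set; }
        }

    And a sketch of the matching <connectionStrings> entry in web.config (the provider name shown is the standard SQL CE 4 invariant name; |DataDirectory| resolves to \App_Data in an ASP.NET project):

        <connectionStrings>
          <add name="NerdDinners"
               providerName="System.Data.SqlServerCe.4.0"
               connectionString="Data Source=|DataDirectory|NerdDinners.sdf" />
        </connectionStrings>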
    Step 7: Using VS 2010 SP1 to Explore our Newly Created SQL CE Database

    Click the “Show All Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF Code First within the \App_Data\ folder. We can optionally right-click on the file and choose “Include in Project” to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. When we double-click our NerdDinners.sdf SQL CE file we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how the two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above. We can right-click on a table and use the “Show Table Data” command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate the table data. And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up.

    Step 8: Changing our Model and Database Schema

    Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let’s add an additional string property called “UrlLink” to our “Dinner” class. We’ll use this to point to a link for more information about the event. Now when we re-run our project and visit the /Dinners/Upcoming URL we’ll see an error thrown. We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

    - Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
    - Modify the schema of the existing database to bring it in sync with the model classes (keeping/migrating the data within the existing DB)

    There are a couple of ways you can do the second approach. Below I’m going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

    Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

    The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.
    To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command. This will bring up the “Edit Table” dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. Here we add a new “UrlLink” column of type “nvarchar” (since our property is a string). When we click OK our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between the model classes and the database schema). Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it. This will leave us with just the two tables that correspond to our model classes. And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly. One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one. And now when we refresh /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database.

    Summary

    SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option. This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to start with a small embedded database solution and scale up your application as needed.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An abundance of LINQ queries and expressions using both the query and method syntax.

    - by nikolaosk
    In this post I will be writing LINQ queries against an array of strings and an array of integers. Moreover, I will be using LINQ to query a SQL Server database. I can use LINQ against arrays since arrays of strings/integers implement the IEnumerable interface. I thought it would be a good idea to use both the method syntax and the query syntax. There are other places on the net where you can find examples of LINQ queries, but I decided to create a big post using as many LINQ examples as possible. We...(read more)
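    As a taste of the two styles the post compares, here is a minimal sketch (the array contents are illustrative, not from the article):

        using System;
        using System.Linq;

        class LinqSyntaxDemo
        {
            static void Main()
            {
                string[] names = { "Nikolaos", "John", "Maria", "Al" };

                // Query syntax
                var longNamesQuery = from n in names
                                     where n.Length >= 4
                                     orderby n
                                     select n.ToUpper();

                // Method syntax – the equivalent chain of extension methods
                var longNamesMethod = names.Where(n => n.Length >= 4)
                                           .OrderBy(n => n)
                                           .Select(n => n.ToUpper());

                Console.WriteLine(string.Join(", ", longNamesQuery));   // JOHN, MARIA, NIKOLAOS
                Console.WriteLine(string.Join(", ", longNamesMethod));  // same result
            }
        }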

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our migration checklist continues:

    Step 5: Update statistics

    It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database:

        sp_updatestats

    The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.

    Step 6: Set database options

    You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties. However, it is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases. Another important option you may want to set for the newly restored/attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it on disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This is not something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option, as it offers more protection.

    Step 7: Map database users to logins

    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
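    As a sketch of the full call shape (parameter names as documented for sp_change_users_login; all values here are placeholders):

        EXEC sp_change_users_login
            @Action          = 'Report',   -- 'Report', 'Update_One' or 'Auto_Fix'
            @UserNamePattern = NULL,       -- database user to map
            @LoginName       = NULL,       -- login to map it to (Update_One only)
            @Password        = NULL;       -- password used if Auto_Fix creates the login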
    Depending on what you want to do, you may use fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called “bob” and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following:

        EXEC sp_change_users_login
            @Action = 'Update_One',
            @UserNamePattern = 'bob',
            @LoginName = 'bob'

    If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map those database user accounts. Continues…

    Read the article

  • WPF Login Verification Using Active Directory

    - by psheriff
    Back in October of 2009 I created a WPF login screen (Figure 1) that just showed how to create the layout for a login screen. That one sample is probably the most downloaded sample we have. So in this blog post, I thought I would update that screen and also hook it up to show how to authenticate your user against Active Directory. Figure 1: Original WPF Login Screen. I have updated not only the code behind for this login screen, but also the look and feel, as shown in Figure 2. Figure 2: An Updated WPF Login Screen.

    The UI

    To create the UI for this login screen you can refer to my October 2009 blog post to see how to create the borderless window. You can then look at the sample code to see how I created the linear gradient brush for the background. There are just a few differences in this screen compared to the old version. First, I changed the key image and, instead of using words for the Cancel and Login buttons, I used some icons. Secondly, I added a text box to hold the domain name that you wish to authenticate against. This text box is automatically filled in if you are connected to a network. In the Window_Loaded event procedure of the winLogin window you can retrieve the user’s domain name from the Environment.UserDomainName property. For example:

        txtDomain.Text = Environment.UserDomainName;

    The ADHelper Class

    Instead of coding the call to authenticate the user directly in the login screen, I created an ADHelper class. This will make it easier if you want to add additional AD calls in the future. The ADHelper class contains just one method at this time, called AuthenticateUser. This method authenticates a user name and password against the specified domain. The login screen will gather the credentials from the user, such as their user name and password, and also the domain name to authenticate against. To use this ADHelper class you will need to add a reference to System.DirectoryServices.dll in .NET.

    The AuthenticateUser Method

    In order to authenticate a user against your Active Directory you will need to supply a valid LDAP path string to the constructor of the DirectoryEntry class. The LDAP path string will be in the format LDAP://DomainName. You will also pass the user name and password to the constructor of the DirectoryEntry class. Once the DirectoryEntry object has been populated with the LDAP path string, user name and password, you pass it to the constructor of a DirectorySearcher object. You then call the FindOne method on the DirectorySearcher object. If the DirectorySearcher object returns a SearchResult then the credentials supplied are valid. If the credentials are not valid on the Active Directory then an exception is thrown.
    C#

        public bool AuthenticateUser(string domainName, string userName, string password)
        {
            bool ret = false;

            try
            {
                DirectoryEntry de = new DirectoryEntry("LDAP://" + domainName,
                                                       userName, password);
                DirectorySearcher dsearch = new DirectorySearcher(de);
                SearchResult results = null;

                results = dsearch.FindOne();

                ret = true;
            }
            catch
            {
                ret = false;
            }

            return ret;
        }

    Visual Basic

        Public Function AuthenticateUser(ByVal domainName As String, _
          ByVal userName As String, ByVal password As String) As Boolean
            Dim ret As Boolean = False

            Try
                Dim de As New DirectoryEntry("LDAP://" & domainName, _
                                             userName, password)
                Dim dsearch As New DirectorySearcher(de)
                Dim results As SearchResult = Nothing

                results = dsearch.FindOne()

                ret = True
            Catch
                ret = False
            End Try

            Return ret
        End Function

    In the Click event procedure under the Login button you will find the following code that will validate the credentials that the user types into the login window.

    C#

        private void btnLogin_Click(object sender, RoutedEventArgs e)
        {
            ADHelper ad = new ADHelper();

            if (ad.AuthenticateUser(txtDomain.Text,
                    txtUserName.Text, txtPassword.Password))
                DialogResult = true;
            else
                MessageBox.Show("Unable to Authenticate Using the Supplied Credentials");
        }

    Visual Basic

        Private Sub btnLogin_Click(ByVal sender As Object, _
          ByVal e As RoutedEventArgs)
            Dim ad As New ADHelper()

            If ad.AuthenticateUser(txtDomain.Text, txtUserName.Text, _
                                   txtPassword.Password) Then
                DialogResult = True
            Else
                MessageBox.Show("Unable to Authenticate Using the Supplied Credentials")
            End If
        End Sub

    Displaying the Login Screen

    At some point when your application launches, you will need to display your login screen modally. Below is the code that you would call to display the login form (named winLogin in my sample application). This code is called from the main application form, and thus the owner of the login screen is set to “this”. You then call the ShowDialog method on the login screen to have this form displayed modally. After the user clicks on one of the two buttons you need to check to see what the DialogResult property was set to. The DialogResult property is a nullable type and thus you first need to check to see if the value has been set.

    C#

        private void DisplayLoginScreen()
        {
            winLogin win = new winLogin();

            win.Owner = this;
            win.ShowDialog();
            if (win.DialogResult.HasValue && win.DialogResult.Value)
                MessageBox.Show("User Logged In");
            else
                this.Close();
        }

    Visual Basic

        Private Sub DisplayLoginScreen()
            Dim win As New winLogin()

            win.Owner = Me
            win.ShowDialog()
            If win.DialogResult.HasValue And win.DialogResult.Value Then
                MessageBox.Show("User Logged In")
            Else
                Me.Close()
            End If
        End Sub

    Summary

    Creating a nice looking login screen is fairly simple to do in WPF. Using the Active Directory services from a WPF application should make your desktop programming task easier, as you do not need to create your own user authentication system. I hope this article gave you some ideas on how to create a login screen in WPF. NOTE: You can download the complete sample code for this blog entry at my website: http://www.pdsa.com/downloads. Click on Tips & Tricks, then select 'WPF Login Verification Using Active Directory' from the drop down list.
    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!

    Read the article

  • Incorrect colour blending when using a pixel shader with XNA

    - by MazK
    I'm using XNA 4.0 to create a 2D game, and while implementing a layer-tinting pixel shader I noticed that when the texture's alpha value is anywhere between 0 and 1 the end result is different than expected. The tinting works by selecting a colour and setting the amount of tint. This is achieved via the shader, which first works out the starting colour (for each of r, g, b and a):

        float red = texCoord.r * vertexColour.r;

    and then the final tinted colour:

        output.r = red + (tintColour.r - red) * tintAmount;

    The alpha value isn't tinted and is left as:

        output.a = texCoord.a * vertexColour.a;

    The picture in the link below shows different backdrops behind an energy ball object whose outer glow hasn't blended as I would like it to. The middle two are incorrect, as the second (non-tinted) one should not show a glow against a white background and the third should be entirely invisible. The blending function is NonPremultiplied. Why is the alpha value interfering with the final colour?
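    (For reference, the per-channel tint above is exactly a linear interpolation; a minimal HLSL sketch of the same maths using the built-in lerp intrinsic, reusing the poster's variable names:)

        // Equivalent to red + (tintColour.r - red) * tintAmount, per channel
        float3 baseColour = texCoord.rgb * vertexColour.rgb;
        float3 tinted = lerp(baseColour, tintColour.rgb, tintAmount);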

    Read the article

  • SSIS Reporting Pack update

    - by jamiet
    It’s been a while since I last posted anything in regard to SSIS Reporting Pack, the most recent release being on 27th May 2012, so here is a short update. There is still lots of work to do on SSIS Reporting Pack: lots more features to add, lots of performance work to be done, and a few bug fixes too. I have also been (fairly) hard at work on a framework to be used in conjunction with SSIS 2012 that I refer to as the Restart Framework (currently residing at http://ssisrestartframework.codeplex.com/). There is still much work to be done on the Restart Framework (not least some useful documentation on how to use it), which is why I haven’t mentioned it publicly before now, although I am actively checking in changes. One thing I am considering is amalgamating the two projects into one; this would mean I could build a suite of reports that work both against the SSIS Catalog (what you currently know as “SSIS Reporting Pack”) and also against this Restart Framework thing. No decision has been made as yet though. There have been a number of bug reports and feature suggestions for SSIS Reporting Pack added to the Issue Tracker. Thank you to everyone that has submitted something; rest assured I am not going to ignore them forever, but my time is at a premium right now unfortunately due to … well … life… so working on these items isn’t near the top of my priority list. Lastly, I am actively using SSIS Reporting Pack in a production environment right now and I’m happy to report that it is proving to be very useful. One of the reports that I have put a lot of time into is execution executable duration.rdl, and it’s proving very adept at easily identifying bottlenecks in our SSIS 2012 executions. The report allows you to browse through the hierarchy of executables in each execution; each bar represents the duration of an executable in relation to all the other executables, with longer bars being a good indication of where problems might lie. The colour of the bar indicates whether it was successful or not (green = success). Hovering over a bar brings up a tooltip showing more information about that executable. Clicking on a bar allows you to compare this particular instance of the executable against other executions. Please do let me know if you are using SSIS Reporting Pack. I would like to hear any anecdotes you might have, good or bad. @Jamiet

    Read the article

  • Oracle Response to Apache Departure from JCP

    - by Henrik Ståhl
    Last month Oracle renominated Apache to the Java Executive Committee because we valued their active participation and perspective on Java. Earlier this week, by an overwhelming majority, the Java Executive Committee voted to move Java forward by formally initiating work on both Java SE 7 and SE 8 based on their technical merits. Apache voted against initiating technical committee work on both SE 7 and SE 8, effectively voting against moving Java forward. Now, despite supporting the technical direction, Apache have announced that they are quitting the Executive Committee. Oracle has a responsibility to move Java forward and to maintain the uniformity of the Java standard for the millions of Java developers and the majority of Executive Committee members agree. We encourage Apache to reconsider its position and remain a part of the process to move Java forward. ASF and many open source projects within it are an important part of the overall Java ecosystem. Adam Messinger, Vice President of Development

    Read the article

  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to

    - find out the password of any one user?
    - find out the password of a given user?
    - find out the password of all users?
    - find a way to log in as one of the users?
    - find a way to log in as a specific user?

    How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct in the answers if I misunderstood something.

    - If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones).
    - If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).
    - To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, roughly 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords.
    - To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157).
    - There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.

    In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
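    To make the brute-force arithmetic above concrete, a minimal sketch – all constants are the question's own assumptions, not measured facts:

        using System;

        class BruteForceEstimate
        {
            static void Main()
            {
                double opsPerSecond = 1e15;   // assumed botnet throughput (question's figure)
                double opsPerHash   = 1000;   // assumed cost of one SHA-1 run (question's figure)
                double alphabet     = 60;     // upper + lower + digits, as approximated above

                // ~8.6e16, i.e. roughly 10^17 candidate passwords per day
                double hashesPerDay = opsPerSecond * 60 * 60 * 24 / opsPerHash;

                for (int length = 9; length <= 11; length++)
                {
                    double candidates = Math.Pow(alphabet, length);
                    // prints ~0.1 days (3 hours), ~7 days, ~420 days
                    Console.WriteLine("up to {0} chars: ~{1:F1} days",
                                      length, candidates / hashesPerDay);
                }
            }
        }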

    Read the article

  • Dangerous programming

    - by benhowdle89
    Ok, I'm talking pure software/web – I'm not on about code to power life-support machines or NASA rockets. In terms of software/web development, what is the most dangerous single piece of code someone could put into a program (say, if they had a grudge against a client/employee)? In PHP, the first thing that comes to mind is some sort of file deletion:

        function EmptyDir($dir) {
            $handle = opendir($dir);
            while (($file = readdir($handle)) !== false) {
                echo "$file <br>";
                @unlink($dir.'/'.$file);
            }
            closedir($handle);
        }
        EmptyDir('images');

    Or a PHP script that takes a user's sensitive input and posts it to Google sitemap or something? I hope this doesn't get closed off as subjective, as there surely must be a ranking order of dangerous code. So I'm asking for the No. 1 spot :) DISCLAIMER: I have no grudges against anyone, just curious for the answer!

    Read the article

  • Should I reuse variables?

    - by IAdapter
    Should I reuse variables? I know that many best practices say you should not do it. However, later, when a different developer is debugging the code and has three variables that look alike and the only difference is that they are created in different places in the code, he might be confused. Unit testing is a great example of this. Still, I do know that best practices are mostly against reuse. For example, they say not to "override" method parameters. Best practices even argue against nulling previous variables (in Java there is Sonar, which warns when you assign null to a variable; since Java 6 you don't need to do that to help the garbage collector. And you can't always control which warnings are turned off; most of the time the default is on.)

    Read the article

  • Free SEO Analysis using IIS SEO Toolkit

    In my spare time I’ve been thinking about new ideas for the SEO Toolkit, and it occurred to me that rather than continuing to figure out more reports and better diagnostics against some random fake sites, it could be interesting to ask openly for anyone who wants a free SEO analysis report of their site, and test drive some of it against real sites. So what is in it for you? I will analyze your site to look for common SEO errors, and I will create a digest of actions to do and other...

    Read the article

  • What are the downsides to using dependency injection?

    - by kerry
    I recently came across an interesting question on Stack Overflow with some interesting responses. I like this post for three reasons. First, I am a big fan of dependency injection: it forces you to decouple your code, create cohesive interfaces, and should result in testable classes. Second, the author took the approach I usually do when trying to evaluate a technique or technology: suspend personal feelings and try to find some compelling arguments against it. Third, it proved that it is very difficult to come up with a compelling argument against dependency injection. What are the downsides to using dependency injection?
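    (As a quick illustration of the decoupling being praised here, a minimal constructor-injection sketch – the interface and class names are invented for the example:)

        // The consumer depends only on an abstraction...
        public interface IMessageSender
        {
            void Send(string recipient, string body);
        }

        public class Notifier
        {
            private readonly IMessageSender _sender;

            // ...and the concrete implementation is injected from outside,
            // so tests can pass in a fake and production can pass in, say,
            // an SMTP-backed sender.
            public Notifier(IMessageSender sender)
            {
                _sender = sender;
            }

            public void NotifyUser(string user)
            {
                _sender.Send(user, "Your report is ready.");
            }
        }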

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise

    I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is completed on the .NET or Java stacks, I am also recently delving into Node.js. This new stack has got me thinking that I know reasonably well how to secure applications using known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind the best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure and, in order to do that, I feel that I should learn to think like a malevolent hacker.

    1. What motivates a malevolent hacker: what is their prime mover? What is it that they are most after? Ultimately, the answer is money or notoriety I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc.

    2. As an extension of question #1 – but more specific – what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here. This is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker regardless of our individual judgement.

    3. What are some heuristics followed to accomplish hacker goals? Ultimately specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for the low-hanging fruit such as HTTP spoofing" or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there." Possibly, "A hacker will try and attack a site via Foo (browser) first as it is known for Bar vulnerability."

    4. What are the most common hacks employed when following the common heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc.

    Disclaimer

    I am not a hacker, nor am I judging hackers (heck – I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.

    Read the article

  • Finding the lowest average Hamming distance when the order of the strings matter

    - by user1049697
    I have a sequence of binary strings that I want to match against a set of longer sequences of binary strings. A match means that the compared sequence gives the lowest average Hamming distance when every element of the shorter sequence has been matched against a consecutive run of elements in one of the longer sequences. Let me try to explain with an example. I have a set of video frames that have been hashed using a perceptual hashing algorithm, so that video frames that look the same have roughly the same hash. I want to match a short video clip against a set of longer videos, to see if the clip is contained in one of them. This means that I need to find out where the sequence of hashed frames in the short video has the lowest average Hamming distance when compared with the long videos. The short video consists of the substrings Sub1, Sub2 and Sub3, and I want to match them against the hashes of the long videos in Src. The key here is that the strings need to match in the specific order they are given in, e.g. Sub1 always has to match the element immediately before the one Sub2 matches, and Sub2 always has to match the element immediately before the one Sub3 matches. In this example they would map thus: Sub1-Src3, Sub2-Src4 and Sub3-Src5. So the question is this: is there an algorithm for finding the lowest average Hamming distance when the order of the elements compared matters? The naïve approach of comparing the substring sequence at every position of every source sequence won't cut it, of course, so I need something that can preferably match a (much) shorter substring sequence against a set of millions of elements. I have looked at MVP-trees, BK-trees and similar, but everything seems to only take into account one binary string and not a sequence of them.

        Sub1: 100111011111011101
        Sub2: 110111000010010100
        Sub3: 111111010110101101

        Src1: 001011010001010110
        Src2: 010111101000111001
        Src3: 101111001110011101
        Src4: 010111100011010101
        Src5: 001111010110111101
        Src6: 101011111111010101

    I have added a calculation of the examples below. (The Hamming distances aren't correct, but it doesn't matter.)

        Run 1:
        dist(Sub1, Src1) = 8
        dist(Sub2, Src2) = 10
        dist(Sub3, Src3) = 12
        average = 10

        Run 2:
        dist(Sub1, Src2) = 10
        dist(Sub2, Src3) = 12
        dist(Sub3, Src4) = 10
        average = 11

        Run 3:
        dist(Sub1, Src3) = 7
        dist(Sub2, Src4) = 6
        dist(Sub3, Src5) = 10
        average = 8

        Run 4:
        dist(Sub1, Src3) = 10
        dist(Sub2, Src4) = 4
        dist(Sub3, Src5) = 2
        average = 5

    So the winner here is sequence 4, with an average distance of 5.
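    (For reference, a minimal sketch of the naïve sliding-window baseline described above – the thing a cleverer index structure would need to beat:)

        static int Hamming(string a, string b)
        {
            // assumes equal-length binary strings
            int d = 0;
            for (int i = 0; i < a.Length; i++)
                if (a[i] != b[i]) d++;
            return d;
        }

        // Try every offset where the whole Sub sequence fits, in order,
        // and return the offset with the lowest average Hamming distance.
        static (int offset, double average) BestMatch(string[] sub, string[] src)
        {
            int bestOffset = -1;
            double bestAverage = double.MaxValue;
            for (int off = 0; off + sub.Length <= src.Length; off++)
            {
                int total = 0;
                for (int i = 0; i < sub.Length; i++)
                    total += Hamming(sub[i], src[off + i]);
                double average = (double)total / sub.Length;
                if (average < bestAverage)
                {
                    bestAverage = average;
                    bestOffset = off;
                }
            }
            return (bestOffset, bestAverage);
        }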

    Read the article

  • Uninstalling with Ubuntu Software Center doesn't work on Ubuntu 12.04.1 64bit

    - by likethesky
    Not sure if I'm doing something wrong, or if the .deb package I'm installing is broken in some way (I've built it, using NetBeans 7.2), or if indeed this is a bug in Software Center. When I install this particular 32-bit .deb on Ubuntu 10.04 LTS – all updates applied – (where it was built), GDebi shows it and has an 'Uninstall' button next to it. So it works fine to uninstall it there, via the GDebi GUI. However, when I install it on 12.04.1 LTS – all updates applied – it installs fine, but then does not show up in Ubuntu Software Center as available to be uninstalled. No combination of searching finds it. However, from the command line I can do sudo apt-get purge javafxapplication1, and it finds and removes it. The same thing happens when I build a 64-bit .deb and attempt to install it to the same (64-bit AMD) or a different 64-bit Ubuntu 12.04.1 system. So it seems to be isolated to this NetBeans-generated .deb and the 64-bit AMD build (though I haven't tried it on a 32-bit 12.04.1 install yet). These are all on VirtualBox VMs, btw, if that matters. Any way to clean up my Software Center and see if it's something I've done to get it in this state? Could this behavior be due to how this particular .deb has been built? (It doesn't have an 'Installed-Size' control field, so I do get the "Package is of bad quality" warning when I install it – which I do by clicking the 'Ignore and install' button.) If you want all the gory details about why this is happening, a bug has been reported against NetBeans for this behavior here: http://javafx-jira.kenai.com/browse/RT-25486 (EDIT: Just to be clear, the app installs fine, runs fine, and all works as intended – I just can't get that 'bad package' message to go away, and now... I also can't uninstall it via Software Center, but rather need to use sudo apt-get purge to uninstall it after it installs.) Thanks for any pointers. I'm happy to report this as a bug against Ubuntu Software Center/Centre too, if that's what it seems to be, just tell me where to do so (a link). I'm a relative Ubuntu, NetBeans, and JavaFX newbie, though a long-time programmer. If I report it as a bug, I'll try it on the 32-bit build of 12.04.1 as well. Also, if I should add any more detail to the bug reported against NetBeans above, let me know – or feel free to add it yourself to the bug report above, if you would like. Thanks again!

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly.

    You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because deployment errors are logged, and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build, so you can fix and redeploy the database very quickly. The alternative approach (i.e. making changes directly in the database using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views.

    Using SQL Projects in SSMS

    A great many developers fail to make use of SQL projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project names: the number at the start of each folder or file name ensures that the SQL scripts are concatenated in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc.

    Concatenating SQL Scripts

    To concatenate the SQL scripts into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command several times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) writes the output of a program into a file, while two angled brackets (>>) append the output of a program to a file. So the command line DIR > filelist.txt would write the output of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.

    Using SQLCmd to execute the concatenated file

    Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd from another batch file, as shown below. SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log. So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.

    Using SSIS to execute the concatenated file

    To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal, and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.

    Tips and Tricks

    Add a new-line at the end of every file. The most common problem encountered with this approach is that the GO statement on the last line of one file ends up on the same line as the comment at the top of the next file after the TYPE command runs. The easy fix is to ensure all your files end with a new-line.

    Remove all USE database statements. SQLCmd identifies which database the script should be run against, so you should remove all USE database commands from your scripts – otherwise you may get unintentional side effects!

    Do the CREATE DATABASE separately. If you are using SSIS to create the database as well as create the objects and populate it, then invoke the CREATE DATABASE command against the master database using a separate package, before calling the package that executes the concatenated SQL script.
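
    The batch file itself only appears as a screenshot in the original post, so here is a minimal sketch of the same workflow in Python (concatenate in name order, clear the read-only flag, run SQLCmd); the root path, file names and database name are illustrative assumptions:

        import os
        import stat
        import subprocess
        from pathlib import Path

        root = Path(r"C:\Projects\SB_DDS")   # SSMS solution folder (assumed)
        out_file = root / "SB_DDS.sql"       # concatenated script

        # Source control may have flagged the old file read-only (the ATTRIB step).
        if out_file.exists():
            os.chmod(out_file, stat.S_IWRITE)

        with open(out_file, "w", encoding="utf-8") as out:
            # Numbered folder/file names (100..., 110...) sort into execution order.
            for script in sorted(root.rglob("*.sql")):
                if script == out_file:
                    continue
                text = script.read_text(encoding="utf-8")
                out.write(text)
                if not text.endswith("\n"):
                    out.write("\n")  # keep GO off the next file's first line

        # The SQLCmd step: run against the local server, log errors to a file.
        subprocess.run(["sqlcmd", "-S", ".", "-d", "SB_DDS_DB", "-E",
                        "-i", str(out_file), "-o", str(root / "SB_DDS.log")])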

    Read the article

  • Hide admin menu if no admin option is available

    - by Jorge
    If you have an "Admin tasks" menu and a number of admin tasks (say 10) that can be assigned to users individually, but some users have no admin tasks at all, how would you hide the admin menu for those users? I was thinking of 3 ways: 1) JavaScript: check whether the Admin menu is empty and, if so, hide it. 2) Check all permissions in the Admin menu with a counter, and show the menu if the counter is > 0; then re-check the permissions for each item to decide what to show. 3) Save all permissions in an associative array, assigning 'true' to granted items. When building the menu, have a function that tests whether at least one permission is granted; that way I wouldn't need to re-check permissions against the DB for each item, just against the array. Is there any better way?
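
    For what it's worth, option 3 can be very small. A minimal sketch in Python (the permission names and the flat map are illustrative; the same shape works in PHP or JavaScript):

        # Loaded once per request, e.g. from a session cache keyed by user.
        admin_permissions = {
            "manage_users": False,
            "edit_content": True,
            "view_reports": False,
        }

        def has_any_admin_permission(perms):
            # One pass over the cached map: no extra DB round-trip per item.
            return any(perms.values())

        if has_any_admin_permission(admin_permissions):
            granted = [name for name, ok in admin_permissions.items() if ok]
            print("Render the Admin menu with:", granted)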

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now distributed independently of Pex, while maintaining support for integration with NUnit and other testing frameworks. For NUnit the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory. There is, however, a downside: since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit you are able to support any version. Note also that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only need to make a couple of changes.

    Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);

                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092.

    Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

    Read the article

  • HAproxy with MySQL Master-Master Replication incredibly slow

    - by Yayap
    I have two MySQL servers in multi-master mode, with an HAProxy machine for simple load balancing/redundancy. When I am connected to one of the servers directly and try to update about 100,000 entries, it completes, including replication, in about half a minute. When connecting through the proxy it usually takes over three whole minutes. Is it normal to have that kind of latency? Is something amiss with my proxy configuration (included below)? This is getting really frustrating, as I assumed the proxy would do some sort of load balancing, or at least have little to no overhead.

        #---------------------------------------------------------------------
        # Example configuration for a possible web application. See the
        # full configuration options online.
        #
        # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
        #
        #---------------------------------------------------------------------

        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.* /var/log/haproxy.log
            #
            log 127.0.0.1 local2
            # chroot /var/lib/haproxy
            # pidfile /var/run/haproxy.pid
            maxconn 4096
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode tcp
            log global
            #option tcplog
            option dontlognull
            option tcp-smart-accept
            option tcp-smart-connect
            #option http-server-close
            #option forwardfor except 127.0.0.0/8
            #option redispatch
            retries 3
            #timeout http-request 10s
            #timeout queue 1m
            timeout connect 400
            timeout client 500
            timeout server 300
            #timeout http-keep-alive 10s
            #timeout check 10s
            maxconn 2000

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option tcpka
            option httpchk
            server db01 192.168.15.118:3306 weight 1 inter 1s rise 1 fall 1
            server db02 192.168.15.119:3306 weight 1 inter 1s rise 1 fall 1
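
    One detail worth flagging when reading that config (my observation, not part of the original question): HAProxy interprets timeout values without an explicit unit as milliseconds, so "timeout connect 400", "timeout client 500" and "timeout server 300" allow well under a second per connection. A sketch of the same block with explicit, more generous units (the values are illustrative, not a recommendation):

        defaults
            mode tcp
            timeout connect 5s     # was 400 (i.e. 400 ms)
            timeout client  30m    # long bulk updates hold connections open
            timeout server  30m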

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >