Search Results

Search found 90601 results on 3625 pages for 'user friendly'.

  • Intermittent 403 errors when using allow to limit access to url with both explicit IP and SetEnvIf

    - by rbieber
     We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN, which adds the complexity that the IPs a request appears to come from are actually the CDN servers' and not the ultimate end user's. In case we need to back the CDN out, we want to handle both situations: the CDN forwarding the request, and the ultimate client sending the request directly. The CDN sends the end user's IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration that we have put in place: SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1 SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1 <Location /uri/> Order deny,allow Allow from 10.1.2.3 192.168. allow from env=access_allowed Deny from all </Location> This seems to work fine for a time; however, at some point the web server starts serving 403 errors to the end user, so for some reason it is restricting access. The odd thing is that a bounce of the web server seems to resolve the issue, but only for a time - then the behavior comes back. It might be worthwhile to note as well that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.

    Read the article

  • permissions on upload folder not working

    - by Camran
     I have a PHP script that uploads images to a folder. These are the permissions on the upload folder: drwxrwxr-- 4 user user 4096 2010-06-02 16:20 temp_images Shouldn't these permissions be enough for files to be uploaded to the folder? They aren't: uploading only works when I set the permissions to 777. "user" has been added to the www-data group, but still no luck. Any ideas why?

    Read the article

  • How to disable multiple form submit (POST) in IIS

    - by user1209640
     We had a major SharePoint outage a few months back because a user wedged their keyboard in such a way that the Enter key was pressed indefinitely. The user was on a customized people search page, and hundreds of POSTs by the same user were submitted asynchronously, which overloaded the server. Because I work in a large organization, I am looking for a more global way to prevent this from happening. Is there a way, within IIS, to prevent multiple web form submissions by the same user within a short period of time? I am aware we can write JavaScript to disable the button after it is clicked, but we are hoping to prevent this issue from occurring on other pages where a similar possibility may exist. Update: Looking at the source code, it appears the JavaScript performs document.location = url whenever keycode 13 (Enter) is pressed. Again, we can write JS to prevent this in this particular location, but we also want to guard against this kind of issue more generally... preferably at the IIS level.
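
     For illustration only, here is a rough sketch of the kind of application-level guard the question is asking about: an ASP.NET HttpModule that refuses a second POST from the same user to the same URL within a short window. The module name, the two-second window, and the 429 status code are arbitrary choices and this is not an existing IIS feature; server-side options such as the IIS Dynamic IP Restrictions extension throttle by client IP rather than by authenticated user.

     using System;
     using System.Collections.Concurrent;
     using System.Web;

     // Hypothetical sketch: refuse a second POST from the same user to the same URL
     // within a short window. Register the module in web.config; a real version would
     // also need to expire old entries instead of letting the dictionary grow.
     public class ThrottlePostModule : IHttpModule
     {
         private static readonly ConcurrentDictionary<string, DateTime> lastPost =
             new ConcurrentDictionary<string, DateTime>();

         private static readonly TimeSpan minInterval = TimeSpan.FromSeconds(2);

         public void Init(HttpApplication app)
         {
             // Run after authentication so the user name is known.
             app.PostAuthenticateRequest += OnPostAuthenticateRequest;
         }

         private static void OnPostAuthenticateRequest(object sender, EventArgs e)
         {
             var app = (HttpApplication)sender;
             HttpContext ctx = app.Context;

             if (!string.Equals(ctx.Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
                 return;

             string user = ctx.User != null && ctx.User.Identity.IsAuthenticated
                 ? ctx.User.Identity.Name
                 : ctx.Request.UserHostAddress;   // fall back to the client IP
             string key = user + "|" + ctx.Request.Path;
             DateTime now = DateTime.UtcNow;

             DateTime previous;
             if (lastPost.TryGetValue(key, out previous) && now - previous < minInterval)
             {
                 ctx.Response.StatusCode = 429;   // tell the client to slow down
                 app.CompleteRequest();           // short-circuit the rest of the pipeline
                 return;
             }

             lastPost[key] = now;
         }

         public void Dispose() { }
     }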

    Read the article

  • Using xsl:variable in an xsl:for-each select statement

    - by Nefariousity
    I'm trying to iterate through an xml document using xsl:foreach but I need the select=" " to be dynamic so I'm using a variable as the source. Here's what I've tried: ... <xsl:template name="SetDataPath"> <xsl:param name="Type" /> <xsl:variable name="Path_1">/Rating/Path1/*</xsl:variable> <xsl:variable name="Path_2">/Rating/Path2/*</xsl:variable> <xsl:if test="$Type='1'"> <xsl:value-of select="$Path_1"/> </xsl:if> <xsl:if test="$Type='2'"> <xsl:value-of select="$Path_2"/> </xsl:if> <xsl:template> ... <!-- Set Data Path according to Type --> <xsl:variable name="DataPath"> <xsl:call-template name="SetDataPath"> <xsl:with-param name="Type" select="/Rating/Type" /> </xsl:call-template> </xsl:variable> ... <xsl:for-each select="$DataPath"> ... The foreach threw an error stating: "XslTransformException - To use a result tree fragment in a path expression, first convert it to a node-set using the msxsl:node-set() function." When I use the msxsl:node-set() function though, my results are blank. I'm aware that I'm setting $DataPath to a string, but shouldn't the node-set() function be creating a node set from it? Am I missing something? When I don't use a variable: <xsl:for-each select="/Rating/Path1/*"> I get the proper results. Here's the XML data file I'm using: <Rating> <Type>1</Type> <Path1> <sarah> <dob>1-3-86</dob> <user>Sarah</user> </sarah> <joe> <dob>11-12-85</dob> <user>Joe</user> </joe> </Path1> <Path2> <jeff> <dob>11-3-84</dob> <user>Jeff</user> </jeff> <shawn> <dob>3-5-81</dob> <user>Shawn</user> </shawn> </Path2> </Rating> My question is simple, how do you run a foreach on 2 different paths?

    Read the article

  • FOSUserBundle override mapping to remove need for username

    - by musoNic80
    I want to remove the need for a username in the FOSUserBundle. My users will login using an email address only and I've added real name fields as part of the user entity. I realised that I needed to redo the entire mapping as described here. I think I've done it correctly but when I try to submit the registration form I get the error: "Only field names mapped by Doctrine can be validated for uniqueness." The strange thing is that I haven't tried to assert a unique constraint to anything in the user entity. Here is my full user entity file: <?php // src/MyApp/UserBundle/Entity/User.php namespace MyApp\UserBundle\Entity; use FOS\UserBundle\Model\User as BaseUser; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Validator\Constraints as Assert; /** * @ORM\Entity * @ORM\Table(name="depbook_user") */ class User extends BaseUser { /** * @ORM\Id * @ORM\Column(type="integer") * @ORM\GeneratedValue(strategy="AUTO") */ protected $id; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your first name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $firstName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your last name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $lastName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your email address.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) * @Assert\Email(groups={"Registration"}) */ protected $email; /** * @ORM\Column(type="string", length=255, name="email_canonical", unique=true) */ protected $emailCanonical; /** * @ORM\Column(type="boolean") */ protected $enabled; /** * @ORM\Column(type="string") */ protected $salt; /** * @ORM\Column(type="string") */ protected $password; /** * @ORM\Column(type="datetime", nullable=true, name="last_login") */ protected $lastLogin; /** * @ORM\Column(type="boolean") */ protected $locked; /** * @ORM\Column(type="boolean") */ protected $expired; /** * @ORM\Column(type="datetime", nullable=true, name="expires_at") */ protected $expiresAt; /** * @ORM\Column(type="string", nullable=true, name="confirmation_token") */ protected $confirmationToken; /** * @ORM\Column(type="datetime", nullable=true, name="password_requested_at") */ protected $passwordRequestedAt; /** * @ORM\Column(type="array") */ protected $roles; /** * @ORM\Column(type="boolean", name="credentials_expired") */ protected $credentialsExpired; /** * @ORM\Column(type="datetime", nullable=true, name="credentials_expired_at") */ protected $credentialsExpiredAt; public function __construct() { parent::__construct(); // your own logic } /** * @return string */ public function getFirstName() { return $this->firstName; } /** * @return string */ public function getLastName() { return $this->lastName; } /** * Sets the first name. * * @param string $firstname * * @return User */ public function setFirstName($firstname) { $this->firstName = $firstname; return $this; } /** * Sets the last name. * * @param string $lastname * * @return User */ public function setLastName($lastname) { $this->lastName = $lastname; return $this; } } I've seen various suggestions about this but none of the suggestions seem to work for me. 
The FOSUserBundle docs are very sparse about what must be a very common request.

    Read the article

  • How to get validation messages from MongoMapper using the Rails console?

    - by Alex
    Hi, I am basically teaching myself how to use RoR and MongoDB at the same time. I am following the very good book / tutorial : http://railstutorial.org/ I decided to replace Sqlite3 by MongoDB using the mongomapper gem. Everything works out about alright, but I am having some non-blocking little issues that I truly wish I could get rid of. In chapter 6, when working with validation I got 2 issues: - I don't know how to get the validations messages back like with Sqlite3. The "standard" code is: $ rails console --sandbox >> user = User.new(:name => "", :email => "[email protected]") >> user.save => false >> user.valid? => false >> user.errors.full_messages => ["Name can't be blank"] but if I try to do the same with MongoMapper, it throws an error saying that errors is undefined function. So does it mean that this is simply not implemented in mongomapper / mongo driver ? Or is there some other clever way to do this that I could not figure ? Additional, 2 things here: - I following the exemple in the book to the line, so I was expecting to be able to use the console in sandbox mode, but apparently that does not work either: (...)ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/console/sandbox.rb:1:in `<top (required)>': uninitialized constant ActiveRecord (NameError) from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:226:in `initialize_console' from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:153:in `load_console' from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:26:in `start' from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:8:in `start' from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' Also, in the book they call "user" but I need to call "User" (note the capital U) why is that ? Is it like mangomapper does not follow the Ruby naming convention or something ? And finally, I am trying to validate the field email with a regex as shown in the tutorial. It does not throws any errors at the code, but whenever I try to insert it just won't ever accept it unless I comment out the :format option... class User include MongoMapper::Document key :name, String, :required => true, :length => { :maximum => 50 } key :email, String, :required => true, # :format => { :with => email_regex }, :uniqueness => { :case_sentitive => false} timestamps! end Any advices you can provide on those topics would help me a lot ! Thanks, Alex

    Read the article

  • just can't get a controller to work

    - by Asaf
    I try to get into mysite/user so that application/classes/controller/user.php should be working, now this is my file tree: code of controller/user.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_User extends Controller_Default { public $template = 'user'; function action_index() { //$view = View::factory('user'); //$view->render(TRUE); $this->template->message = 'hello, world!'; } } ?> code of controller/default.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_default extends Controller_Template { } bootstrap.php: <?php defined('SYSPATH') or die('No direct script access.'); //-- Environment setup -------------------------------------------------------- /** * Set the default time zone. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/timezones */ date_default_timezone_set('America/Chicago'); /** * Set the default locale. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/setlocale */ setlocale(LC_ALL, 'en_US.utf-8'); /** * Enable the Kohana auto-loader. * * @see http://kohanaframework.org/guide/using.autoloading * @see http://php.net/spl_autoload_register */ spl_autoload_register(array('Kohana', 'auto_load')); /** * Enable the Kohana auto-loader for unserialization. * * @see http://php.net/spl_autoload_call * @see http://php.net/manual/var.configuration.php#unserialize-callback-func */ ini_set('unserialize_callback_func', 'spl_autoload_call'); //-- Configuration and initialization ----------------------------------------- /** * Initialize Kohana, setting the default options. * * The following options are available: * * - string base_url path, and optionally domain, of your application NULL * - string index_file name of your index file, usually "index.php" index.php * - string charset internal character set used for input and output utf-8 * - string cache_dir set the internal cache directory APPPATH/cache * - boolean errors enable or disable error handling TRUE * - boolean profile enable or disable internal profiling TRUE * - boolean caching enable or disable internal caching FALSE */ Kohana::init(array( 'base_url' => '/mysite/', 'index_file' => FALSE, )); /** * Attach the file write to logging. Multiple writers are supported. */ Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs')); /** * Attach a file reader to config. Multiple readers are supported. */ Kohana::$config->attach(new Kohana_Config_File); /** * Enable modules. Modules are referenced by a relative or absolute path. */ Kohana::modules(array( 'auth' => MODPATH.'auth', // Basic authentication 'cache' => MODPATH.'cache', // Caching with multiple backends 'codebench' => MODPATH.'codebench', // Benchmarking tool 'database' => MODPATH.'database', // Database access 'image' => MODPATH.'image', // Image manipulation 'orm' => MODPATH.'orm', // Object Relationship Mapping 'pagination' => MODPATH.'pagination', // Paging of results 'userguide' => MODPATH.'userguide', // User guide and API documentation )); /** * Set the routes. Each route must have a minimum of a name, a URI and a set of * defaults for the URI. */ Route::set('default', '(<controller>(/<action>(/<id>)))') ->defaults(array( 'controller' => 'welcome', 'action' => 'index', )); /** * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO']. * If no source is specified, the URI will be automatically detected. 
*/ echo Request::instance() ->execute() ->send_headers() ->response; ?> .htaccess: RewriteEngine On RewriteBase /mysite/ RewriteRule ^(application|modules|system) - [F,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L] Trying to go to http://localhost/ makes the "hello world" page, from the welcome.php Trying to go to http://localhost/mysite/user give me this: The requested URL /mysite/user was not found on this server.

    Read the article

  • ASP.NET roles and Projects

    - by Zyphrax
    EDIT - Rewrote my original question to give a bit more information Background info At my work I'm working on a ASP.Net web application for our customers. In our implementation we use technologies like Forms authentication with MembershipProviders and RoleProviders. All went well until I ran into some difficulties with configuring the roles, because the roles aren't system-wide, but related to the customer accounts and projects. I can't name our exact setup/formula, because I think our company wouldn't approve that... What's a customer / project? Our company provides management information for our customers on a yearly (or other interval) basis. In our systems a customer/contract consists of: one Account: information about the Company per Account, one or more Products: the bundle of management information we'll provide per Product, one or more Measurements: a period of time, in which we gather and report the data Extranet site setup Eventually we want all customers to be able to access their management information with our online system. The extranet consists of two sites: Company site: provides an overview of Account information and the Products Measurement site: after selecting a Measurement, detailed information on that period of time The measurement site is the most interesting part of the extranet. We will create submodules for new overviews, reports, managing and maintaining resources that are important for the research. Our Visual Studio solution consists of a number of projects. One web application named Portal for the basis. The sites and modules are virtual directories within that application (makes it easier to share MasterPages among things). What kind of roles? The following users (read: roles) will be using the system: Admins: development users :) (not customer related, full access) Employees: employees of our company (not customer related, full access) Customer SuperUser: top level managers (full access to their account/measurement) Customer ContactPerson: primary contact (full access to their measurement(s)) Customer Manager: a department manager (limited access, specific data of a measurement) What about ASP.Net users? The system will have many ASP.Net users, let's focus on the customer users: Users are not shared between Accounts SuperUser X automatically has access to all (and new) measurements User Y could be Primary contact for Measurement 1, but have no role for Measurement 2 User Y could be Primary contact for Measurement 1, but have a Manager role for Measurement 2 The department managers are many individual users (per Measurement), if Manager Z had a login for Measurement 1, we would like to use that login again if he participates in Measurement 2. URL structure These are typical urls in our application: http://host/login - the login screen http://host/project - the account/product overview screen (measurement selection) http://host/project/1000 - measurement (id:1000) details http://host/project/1000/planning - planning overview (for primary contact/superuser) http://host/project/1000/reports - report downloads (manager department X can only access report X) We will also create a document url, where you can request a specific document by it's GUID. The system will have to check if the user has rights to the document. The document is related to a Measurement, the User or specific roles have specific rights to the document. What's the problem? (finally ;)) Roles aren't enough to determine what a user is allowed to see/access/download a specific item. 
It's not enough to say that a certain navigation item is accessible to Managers. When the user requests Measurement 1000, we have to check that the user not only has a Manager role, but a Manager role for Measurement 1000. Summarized: How can we limit users to their accounts/measurements? (remember superusers see all measurements, some managers only specific measurements) How can we apply roles at a product/measurement level? (user X could be primarycontact for measurement 1, but just a manager for measurement 2) How can we limit manager access to the reports screen and only to their department's reports? All with the magic of asp.net classes, perhaps with a custom roleprovider implementation. Similar Stackoverflow question/problem http://stackoverflow.com/questions/1367483/asp-net-how-to-manage-users-with-different-types-of-roles
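
     By way of illustration only, here is a minimal sketch of the idea hinted at in the summary: instead of asking the role provider "is this user a Manager?", keep an assignment table keyed on (user, measurement, role) and ask "is this user a Manager for measurement 1000, and for this department?". Every type and member name below is invented for the example; it is not the ASP.NET Membership/Role API.

     using System.Collections.Generic;

     // Hypothetical sketch: measurement-scoped authorization checks.
     public enum MeasurementRole { SuperUser, PrimaryContact, Manager }

     public class MeasurementAssignment
     {
         public string UserName { get; set; }
         public int MeasurementId { get; set; }
         public MeasurementRole Role { get; set; }
         public string Department { get; set; } // only meaningful for Managers
     }

     public class MeasurementAuthorizer
     {
         private readonly List<MeasurementAssignment> assignments;

         public MeasurementAuthorizer(IEnumerable<MeasurementAssignment> assignments)
         {
             this.assignments = new List<MeasurementAssignment>(assignments);
         }

         // SuperUsers see every measurement (account scoping omitted for brevity);
         // everyone else needs an explicit assignment row.
         public bool CanAccessMeasurement(string user, int measurementId)
         {
             return assignments.Exists(a =>
                 a.UserName == user &&
                 (a.Role == MeasurementRole.SuperUser || a.MeasurementId == measurementId));
         }

         // Report downloads: managers are further restricted to their own department.
         public bool CanViewReport(string user, int measurementId, string reportDepartment)
         {
             return assignments.Exists(a =>
                 a.UserName == user &&
                 (a.Role == MeasurementRole.SuperUser ||
                  (a.MeasurementId == measurementId &&
                   (a.Role == MeasurementRole.PrimaryContact ||
                    (a.Role == MeasurementRole.Manager && a.Department == reportDepartment)))));
         }
     }

     A custom RoleProvider can still sit on top of this for coarse, site-wide roles (Admin, Employee), while page and service code consults the measurement-level table for the fine-grained decisions.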

    Read the article

  • Error when calling a custom web service from a plugin

    - by Volodymyr Vykhrushch
    hi guys, I try to call my custom web service which deployed as part of CRM4 and receive the following error: Client found response content type of 'text/html; charset=utf-8', but expected 'text/xml'. The request failed with the error message: -- <html> <head> <title>No Microsoft Dynamics CRM user exists with the specified domain name and user ID</title> <style> ... </style> </head> <body bgcolor="white"> <span><H1>Server Error in '/RecurrenceService' Application.<hr width=100% size=1 color=silver></H1> <h2> <i>No Microsoft Dynamics CRM user exists with the specified domain name and user ID</i> </h2></span> ... <table width=100% bgcolor="#ffffcc"> <tr> <td> <code><pre> [CrmException: No Microsoft Dynamics CRM user exists with the specified domain name and user ID] Microsoft.Crm.Authentication.WindowsAuthenticationProvider.Authenticate(HttpApplication application) +895 Microsoft.Crm.Authentication.AuthenticationStep.Authenticate(HttpApplication application) +125 Microsoft.Crm.Authentication.AuthenticationPipeline.Authenticate(HttpApplication application) +66 Microsoft.Crm.Authentication.AuthenticationEngine.Execute(Object sender, EventArgs e) +513 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +92 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64 </pre></code> </td> </tr> </table> <br> <hr width=100% size=1 color=silver> <b>Version Information:</b> Microsoft .NET Framework Version:2.0.50727.1433; ASP.NET Version:2.0.50727.1433 </font> </body> </html> <!-- [CrmException]: No Microsoft Dynamics CRM user exists with the specified domain name and user ID at Microsoft.Crm.Authentication.WindowsAuthenticationProvider.Authenticate(HttpApplication application) at Microsoft.Crm.Authentication.AuthenticationStep.Authenticate(HttpApplication application) at Microsoft.Crm.Authentication.AuthenticationPipeline.Authenticate(HttpApplication application) at Microsoft.Crm.Authentication.AuthenticationEngine.Execute(Object sender, EventArgs e) at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) --> --. There are some additional data: code for calling my web service: RecurrenceService serv = new RecurrenceService(); serv.Credentials = System.Net.CredentialCache.DefaultCredentials; string result = serv.UpdateSeries(); CRM4 url: "http://cw-dev-5/loader.aspx" custom service url: "http://cw-dev-5/RecurrenceService/RecurrenceService.asmx" the following code snippet System.Security.Principal.WindowsIdentity.GetCurrent().Name return: NT AUTHORITY\NETWORK SERVICE (I suppose it's a cause of error) Could someone suggest me any solution to resolve my issue?
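
     A hedged observation, for illustration only: the stack trace shows the call arriving at the service as NT AUTHORITY\NETWORK SERVICE, which is not a licensed CRM user, so CRM's Windows authentication module rejects it. One common workaround is to call the service under an explicit account that does exist (and is enabled) in CRM. In the sketch below, the account name, password and domain are placeholders, and RecurrenceService is the asker's own generated proxy class.

     using System.Net;

     public static class RecurrenceServiceCaller
     {
         // "RecurrenceService" is the proxy already generated in the plugin project;
         // the credentials below are placeholders for an enabled CRM user account.
         public static string UpdateSeriesAsServiceAccount()
         {
             RecurrenceService serv = new RecurrenceService();
             serv.Url = "http://cw-dev-5/RecurrenceService/RecurrenceService.asmx";
             serv.Credentials = new NetworkCredential("crmServiceAccount", "password", "MYDOMAIN");
             return serv.UpdateSeries();
         }
     }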

    Read the article

  • Help with 2-part question on ASP.NET MVC and Custom Security Design

    - by JustAProgrammer
    I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow for different kinds of clients to access. I have a two part question; the first deals with my general design and the second deals with how to utilize in ASP.NET MVC Primarily, there will initially be an ASP.NET MVC "client" front-end and there will be a set of web-services for third parties to interact with (perhaps mobile, etc). I realize I could have the ASP.NET MVC app interact just through the Web Service but I think that is unnecessary overhead. So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (So, this includes methods like CreateCustomer, EditProduct, etc for example) Also, my permissions requirements are a little complicated. I can't really use a straight Roles system as I need to have some fine-grained permissions (but all permissions are positive rights). So, I don't think I can really use the ASP.NET Roles/Membership system or if I can it seems like I'd be doing more work than rolling my own. I've used Membership before and for this one I think I'd rather roll my own. Both the Web App and Web Services will need to keep security as a concern. So, my design is kind of like this: Each method in the API will need to verify the security of the caller In the Web App, each "page" ("action" in MVC speak) will also check the user's permissions (So, don't present the user with the "Add Customer" button if the user does not have that right but also whenever the API receives AddCustomer(), check the security too) I think the Web Service really needs the checking in the DLL because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App); also having the security checks in the API means I don't really HAVE TO check it in other places if I'm on a mobile (say iPhone) and don't want to do all kinds of checking on the client However, in the Web App I think there will be some duplication of work since the Web App checks the user's security before presenting the user with options, which is ok, but I was thinking of a way to avoid this duplication by allowing the Web App to tell the API not check the security; while the Web Service would always want security to be verified Is this a good method? If not, what's better? If so, what's a good way of implementing this. I was thinking of doing this: In the API, I would have two functions for each action: // Here, "Credential" objects are just something I made up public void AddCustomer(string customerName, Credential credential , bool checkSecurity) { if(checkSecurity) { if(Has_Rights_To_Add_Customer(credential)) // made up for clarity { AddCustomer(customerName); } else // throw an exception or somehow present an error } else AddCustomer(customerName); } public void AddCustomer(string customerName) { // actual logic to add the customer into the DB or whatever // Would it be good for this method to verify that the caller is the Web App // through some method? } So, is this a good design or should I do something differently? My next question is that clearly it doesn't seem like I can really use [Authorize ...] for determining if a user has the permissions to do something. 
In fact, one action might depend on a variety of permissions and the View might hide or show certain options depending on the permission. What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App in Session or whatever and the MVC Action method would check if that user can use that Action and then the View will have some ViewData or whatever where it checks the various permissions to do Hide/Show?
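
     To illustrate the last idea in the question (a permission set carried in session plus attribute-based checks), here is a rough sketch. All of the types are invented for the example; this is not the built-in membership API, and storing the permissions in session is just one of several possible places to keep them.

     using System.Web;
     using System.Web.Mvc;

     // Hypothetical sketch: guard each action by permission name instead of by role.
     public interface IPermissionSet
     {
         bool Has(string permission);
     }

     public class RequiresPermissionAttribute : AuthorizeAttribute
     {
         private readonly string permission;

         public RequiresPermissionAttribute(string permission)
         {
             this.permission = permission;
         }

         protected override bool AuthorizeCore(HttpContextBase httpContext)
         {
             if (!base.AuthorizeCore(httpContext))
                 return false;

             // Loaded at login time in this sketch; could equally come from a cache or the API.
             var permissions = httpContext.Session["Permissions"] as IPermissionSet;
             return permissions != null && permissions.Has(permission);
         }
     }

     // Usage on an action (hypothetical permission name):
     // [RequiresPermission("Customer.Add")]
     // public ActionResult AddCustomer(string customerName) { ... }

     The same IPermissionSet instance can be handed to the views for the hide/show decisions, so the action filter and the markup consult a single source of truth.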

    Read the article

  • Entity with Guid ID is not inserted by NHibernate

    - by DanK
    I am experimenting with NHibernate (version 2.1.0.4000) with Fluent NHibernate Automapping. My test set of entities persists fine with default integer IDs I am now trying to use Guid IDs with the entities. Unfortunately changing the Id property to a Guid seems to stop NHibernate inserting objects. Here is the entity class: public class User { public virtual int Id { get; private set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual string Email { get; set; } public virtual string Password { get; set; } public virtual List<UserGroup> Groups { get; set; } } And here is the Fluent NHibernate configuration I am using: SessionFactory = Fluently.Configure() //.Database(SQLiteConfiguration.Standard.InMemory) .Database(MsSqlConfiguration.MsSql2008.ConnectionString(@"Data Source=.\SQLEXPRESS;Initial Catalog=NHibernateTest;Uid=NHibernateTest;Password=password").ShowSql()) .Mappings(m => m.AutoMappings.Add( AutoMap.AssemblyOf<TestEntities.User>() .UseOverridesFromAssemblyOf<UserGroupMappingOverride>())) .ExposeConfiguration(x => { x.SetProperty("current_session_context_class","web"); }) .ExposeConfiguration(Cfg => _configuration = Cfg) .BuildSessionFactory(); Here is the log output when using an integer ID: 16:23:14.287 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:23:14.291 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#<null>] 16:23:14.299 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing insertions 16:23:14.309 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing identity-insert immediately 16:23:14.313 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Inserting entity: TestEntities.User (native id) 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Opened new IDbCommand, open IDbCommands: 1 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Building an IDbCommand object for the SqlString: INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY() 16:23:14.322 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Dehydrating entity: [TestEntities.User#<null>] 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 0 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 1 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding 'ertr' to parameter: 2 16:23:14.324 [4] DEBUG NHibernate.Type.StringType - binding 'tretret' to parameter: 3 16:23:14.329 [4] DEBUG NHibernate.SQL - INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (@p0, @p1, @p2, @p3); select SCOPE_IDENTITY();@p0 = NULL, @p1 = NULL, @p2 = 'ertr', @p3 = 'tretret' and here is the output when using a Guid: 16:50:14.008 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:50:14.012 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - generated identifier: d74e1bd3-1c01-46c8-996c-9d370115780d, using strategy: NHibernate.Id.GuidCombGenerator 16:50:14.013 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#d74e1bd3-1c01-46c8-996c-9d370115780d] This is where it silently fails, with no exception thrown or further log entries. It looks like it is generating the Guid ID correctly for the new object, but is just not getting any further than that. Is there something I need to do differently in order to use Guid IDs? 
Thanks, Dan.
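
     A hedged guess at two things worth checking, with a sketch. First, the class listed above still declares the Id as an int; for a Guid identifier the property type itself has to change. Second, with the guid.comb generator NHibernate no longer needs the database to produce the id, so the INSERT is deferred until the session is flushed rather than executed immediately at Save() time, which is exactly where the identity log ends and the Guid log goes quiet. Wrapping the work in a committed transaction, or calling Flush, makes the statement actually run.

     using System;
     using NHibernate;

     // Sketch only: Guid-typed Id plus an explicit transaction so the deferred INSERT is issued.
     public class User
     {
         public virtual Guid Id { get; private set; }
         public virtual string FirstName { get; set; }
         public virtual string LastName { get; set; }
         public virtual string Email { get; set; }
         public virtual string Password { get; set; }
     }

     public static class UserRepository
     {
         public static void Add(ISessionFactory sessionFactory, User user)
         {
             using (ISession session = sessionFactory.OpenSession())
             using (ITransaction tx = session.BeginTransaction())
             {
                 session.Save(user);   // with guid.comb this only assigns the id; no SQL yet
                 tx.Commit();          // flushes the session and issues the INSERT
             }
         }
     }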

    Read the article

  • Appropriate programming design questions.

    - by Edward
    I have a few questions on good programming design. I'm going to first describe the project I'm building so you are better equipped to help me out. I am coding a Remote Assistance Tool similar to TeamViewer, Microsoft Remote Desktop, CrossLoop. It will incorporate concepts like UDP networking (using Lidgren networking library), NAT traversal (since many computers are invisible behind routers nowadays), Mirror Drivers (using DFMirage's Mirror Driver (http://www.demoforge.com/dfmirage.htm) for realtime screen grabbing on the remote computer). That being said, this program has a concept of being a client-server architecture, but I made only one program with both the functionality of client and server. That way, when the user runs my program, they can switch between giving assistance and receiving assistance without having to download a separate client or server module. I have a Windows Form that allows the user to choose between giving assistance and receiving assistance. I have another Windows Form for a file explorer module. I have another Windows Form for a chat module. I have another Windows Form form for a registry editor module. I have another Windows Form for the live control module. So I've got a Form for each module, which raises the first question: 1. Should I process module-specific commands inside the code of the respective Windows Form? Meaning, let's say I get a command with some data that enumerates the remote user's files for a specific directory. Obviously, I would have to update this on the File Explorer Windows Form and add the entries to the ListView. Should I be processing this code inside the Windows Form though? Or should I be handling this in another class (although I have to eventually pass the data to the Form to draw, of course). Or is it like a hybrid in which I process most of the data in another class and pass the final result to the Form to draw? So I've got like 5-6 forms, one for each module. The user starts up my program, enters the remote machine's ID (not IP, ID, because we are registering with an intermediary server to enable NAT traversal), their password, and connects. Now let's suppose the connection is successful. Then the user is presented with a form with all the different modules. So he can open up a File Explorer, or he can mess with the Registry Editor, or he can choose to Chat with his buddy. So now the program is sort of idle, just waiting for the user to do something. If the user opens up Live Control, then the program will be spending most of it's time receiving packets from the remote machine and drawing them to the form to provide a 'live' view. 2. Second design question. A spin off question #1. How would I pass module-specific commands to their respective Windows Forms? What I mean is, I have a class like "NetworkHandler.cs" that checks for messages from the remote machine. NetworkHandler.cs is a static class globally accessible. So let's say I get a command that enumerates the remote user's files for a specific directory. How would I "give" that command to the File Explorer Form. I was thinking of making an OnCommandReceivedEvent inside NetworkHandler, and having each form register to that event. When the NetworkHandler received a command, it would raise the event, all forms would check it to see if it was relevant, and the appropriate form would take action. Is this an appropriate/the best solution available? 3. The networking library I'm using, Lidgren, provides two options for checking networking messages. 
One can either poll ReadMessage() to return null or a message, or one can use an AutoResetEvent OnMessageReceived (I'm guessing this is like an event). Which one is more appropriate?
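
     For illustration, here is a minimal sketch of the event-based dispatch described in question 2, with the data shaping kept out of the form as discussed in question 1. The command and module types are invented; the one detail worth calling out is marshalling back to the UI thread (BeginInvoke) before touching any controls, because the network code will raise the event from a worker thread.

     using System;
     using System.Windows.Forms;

     // Hypothetical command envelope raised by the network layer.
     public enum ModuleKind { FileExplorer, Chat, RegistryEditor, LiveControl }

     public class CommandReceivedEventArgs : EventArgs
     {
         public ModuleKind TargetModule { get; set; }
         public string Name { get; set; }      // e.g. "EnumerateDirectory"
         public object Payload { get; set; }   // e.g. a string[] of file names
     }

     public static class NetworkHandler
     {
         public static event EventHandler<CommandReceivedEventArgs> CommandReceived;

         // Called by the receive loop whenever a complete command has been decoded.
         public static void RaiseCommandReceived(CommandReceivedEventArgs args)
         {
             var handler = CommandReceived;
             if (handler != null)
                 handler(null, args);
         }
     }

     public class FileExplorerForm : Form
     {
         public FileExplorerForm()
         {
             NetworkHandler.CommandReceived += OnCommandReceived;
             FormClosed += (s, e) => NetworkHandler.CommandReceived -= OnCommandReceived;
         }

         private void OnCommandReceived(object sender, CommandReceivedEventArgs e)
         {
             if (e.TargetModule != ModuleKind.FileExplorer)
                 return; // not addressed to this form

             // Hop onto the UI thread before updating any controls.
             BeginInvoke(new Action(() => ShowDirectoryListing((string[])e.Payload)));
         }

         private void ShowDirectoryListing(string[] entries)
         {
             // Populate the ListView here; parsing and shaping of the raw data can live
             // in a separate presenter class so the form only draws the final result.
         }
     }

     An alternative to having every form filter the event is a small router holding a dictionary from ModuleKind to handler, which avoids waking forms for commands they will ignore.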

    Read the article

  • C++: casting to void* and back

    - by MInner
    * ---Edit - now the whole sourse* When I debug it on the end, "get" and "value" have different values! Probably, I convert to void* and back to User the wrong way? #include <db_cxx.h> #include <stdio.h> struct User{ User(){} int name; int town; User(int a){}; inline int get_index(int a){ return town; } //for another stuff }; int main(){ try { DbEnv* env = new DbEnv(NULL); env->open("./", DB_CREATE | DB_INIT_MPOOL | DB_THREAD | DB_INIT_LOCK | DB_INIT_TXN | DB_RECOVER | DB_INIT_LOG, 0); Db* datab = new Db(env, 0); datab->open(NULL, "db.dbf", NULL, DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0); Dbt key, value, get; char a[10] = "bbaaccd"; User u; u.name = 1; u.town = 34; key.set_data(a); key.set_size(strlen(a) + 1 ); value.set_data((void*)&u); value.set_size(sizeof(u)); get.set_flags(DB_DBT_MALLOC); DbTxn* txn; env->txn_begin(NULL, &txn, 0); datab->put(txn, &key, &value, 0); datab->get(txn, &key, &get, 0); txn->commit(0); User g; g = *((User*)&get); printf("%d", g.town); getchar(); return 0; }catch (DbException &e){ printf("%s", e.what()); getchar(); } solution create a kind of "serializator" what would convert all POD's into void* and then will unite these pieces PS Or I'd rewrite User into POD type and everything will be all right, I hope. Add It's strange, but... I cast a defenetly non-pod object to void* and back (it has std::string inside) and it's all right (without sending it to the db and back). How could it be? And after I cast and send 'trough' db defenetly pod object (no extra methods, all members are pod, it's a simple struct {int a; int b; ...}) I get back dirted one. What's wrong with my approach? Add about week after first 'add' Damn... I've compiled it ones, just for have a look at which kind of dirt it returnes, and oh! it's okay!... I can't ! ... AAh!.. Lord... A reasonable question (in 99.999 percent of situations right answer is 'my', but... here...) - whos is this fault? My or VSs?

    Read the article

  • Moving from MySQL to MySQLi? I have the code here but I don't get it

    - by MuqMan
    I have posted the code there, please help me out as I am a newbie, I don't know much in terms of deprecation and stuff. <?php session_start(); include('settings.php'); $issub = $_POST['issub']; if($issub == "yes") { require('settings.php'); $dbcon = mysql_connect($dbhost, $dbuser, $dbpword); if(!dbcon) { die('Could not connect'.mysql_error()); } $selectdb = mysql_select_db($db, $dbcon); $formset = 'yes'; $val = 0; $user = trim($_POST['username'], ' '); $luser = mysql_real_escape_string($user); $password = $_POST['password']; $lpassword = mysql_real_escape_string($password); $selectdb; $userq = mysql_query("SELECT user FROM users WHERE user='".$luser."'"); $userresult = @mysql_result($userq, 0); //echo $userresult; if($userresult == $user) { $val = $val + 1; $usercorrect = 'yes'; } else { $usercorrect = 'no'; } $dbselect; $passwordq = mysql_query("SELECT password FROM users where user='".$luser."'"); $passresult = @mysql_result($passwordq, 0); if($passresult == sha1($password)) { $val = $val + 1; $passcorrect = 'yes'; } else { $passcorrect = 'no'; } if ($val == 2) { $_SESSION['loggedin'] = 'yes'; $_SESSION['uloggedin'] = $user; header('location: logged.php'); } }?> <?php ini_set('display_errors', 1); require('testinclude.php'); ?> <body> <div id="loginform"> <form action="/login.php" method="post" > <input type="hidden" name="issub" value="yes" /> <?php if($usercorrect == 'no') { echo '<span class="required"><i><small>The email address or password you entered is incorrect, please try again.</a></small></i></span>'; } ?> <br /> email: <?php if ($issub == 'yes') { if($user == null){ echo '<br /><span class="required"><i><small>Please enter your email address</a></small></i></span>'; } } ?> <br /><input type="text" name="username" id="usename" /> <br /> password: <br /><input type="password" name="password" id="password" /> <br /> <input type="submit" value="login" /> </form> <div> </body>

    Read the article

  • I never really understood: what is an Application Binary Interface (ABI)?

    - by claws
     I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to understand things clearly. Please don't point me to the wiki article - if I could understand it, I wouldn't be here posting such a lengthy post. This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it doesn't provide any functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the television set. Interface: it is an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it. Now, depending on who the user is, there are different types of interfaces. Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind. functionality: my software's functionality, which solves some purpose, for which we are describing this interface. existing entities: commands. consumer: user. Graphical User Interface (GUI): windows, buttons, etc. are the existing entities, again the consumer is the user, and the functionality lies behind. functionality: my software's functionality, which solves some purpose, for which we are describing this interface. existing entities: windows, buttons, etc. consumer: user. Application Programming Interface (API): functions, or more correctly interfaces (in interface-based programming), are the existing entities, the consumer here is another program, not a user, and again the functionality lies behind this layer. functionality: my software's functionality, which solves some purpose, for which we are describing this interface. existing entities: functions, interfaces (arrays of functions). consumer: another program/application. Application Binary Interface (ABI): here is where my problem starts. functionality: ??? existing entities: ??? consumer: ??? I've written a few pieces of software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure whether I have ever provided an ABI. http://en.wikipedia.org/wiki/Application_binary_interface says: "ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; Other ABIs standardize details such as the C++ name mangling, exception propagation, and calling convention between compilers on the same platform, but do not require cross-platform compatibility." Who needs these details? Please don't say "the OS". I know assembly programming. I know how linking and loading work. I know exactly what happens inside. Where does C++ name mangling come in? I thought we were talking at the binary level - where do languages come in? Anyway, I've downloaded the [PDF] System V Application Binary Interface, Edition 4.1 (1997-03-18), to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest of the chapters are "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format spec is the ABI; it doesn't qualify as an interface according to the definition above. I know that, since we are talking at such a low level, it must be very specific,
 but I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? These are the major queries that are bugging me.
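
     As a concrete illustration (not part of the original question), here is one everyday consumer of an ABI: the interop layer that lets managed code call into a native Windows DLL. For the call below to work, both sides must already agree on the things an ABI pins down: how the SYSTEMTIME structure is laid out in memory, how the out parameter is passed, and which calling convention is used. The consumer of those rules is not a person but the compiler and the marshaller that generate the call.

     using System;
     using System.Runtime.InteropServices;

     // Windows-only demo: both the field order/packing of SYSTEMTIME and the stdcall
     // convention are ABI-level details the two sides must share for this to work.
     class AbiDemo
     {
         [StructLayout(LayoutKind.Sequential)]   // memory layout: an ABI concern
         struct SYSTEMTIME
         {
             public ushort wYear, wMonth, wDayOfWeek, wDay;
             public ushort wHour, wMinute, wSecond, wMilliseconds;
         }

         [DllImport("kernel32.dll", CallingConvention = CallingConvention.StdCall)]
         static extern void GetSystemTime(out SYSTEMTIME time);

         static void Main()
         {
             SYSTEMTIME now;
             GetSystemTime(out now);   // works only because both sides honour the same ABI
             Console.WriteLine("{0}-{1:00}-{2:00} {3:00}:{4:00} UTC",
                 now.wYear, now.wMonth, now.wDay, now.wHour, now.wMinute);
         }
     }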

    Read the article

  • CodePlex Daily Summary for Saturday, March 06, 2010

    CodePlex Daily Summary for Saturday, March 06, 2010New ProjectsAgr.CQRS: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. BigDays 2010: Big>Days 2010BizTalk - Controlled Admin: Hi .NET folks, I am planning to start project on a Controlled BizTalk Admin tool. This tool will be useful for the organizations which have "Sh...Blacklist of Providers: Blacklist of Providers - the application for department of warehouse logistics (warehouse) at firms.Career Vector: A job board software.Chargify Demo: This is a sample website for ChargifyConceptual: Concept description and animationEric Hexter: My publicly available source code and examplesFluentNHibernate.Search: A Fluent NHibernate.Search mapping interface for NHibernate provider implementation of Lucene.NET.FreelancePlanner: FreelancePlanner is a project tracking tool for freelance translators.HTMLx - JavaScript on the Server for .NET: HTMLx is a set of libraries based on ASP.NET engine to provide JavaScript programmability on the server side. It allows Web developers to use JavaS...IronMSBuild: IronMSBuild is a custom MSBuild Task, which allows you to execute IronRuby scripts. // have to provide some examples LINQ To Blippr: LINQ to Blippr is an open source LINQ Provider for the micro-reviewing service Blippr. LINQ to Blippr makes it easier and more efficent for develo...Luk@sh's HTML Parser: library that simplifies parsing of the HTML documents, for .NETMeta Choons: Unsure as yet but will be a kind of discogs type site but different..NetWork2: NetWork2Regular Expression Chooser: Simple gui for choosing the regular expressions that have become more than simple.See.Sharper: Hopefully useful C# extensions.SharePoint 2010 Toggle User Interface: Toggle the SharePoint 2010 user interface between the new SharePoint 2010 user interface and SharePoint 2007 user interface.Silverlight DiscussionBoard for SharePoint: This is a sharepoint 3.0 webpart that uses a silverlight treeview to display metadata about sharepoint discussions anduses the html bridge to show...Simple Sales Tracking CRM API Wrapper: The Simple Sales Tracking API Wrapper, enables easy extention development and integration with the hosted service at http://www.simplesalestracking...Syntax4Word: A syntax addin for word 2007.TortoiseHg installer builder: TortoiseHg and Mercurial installer builder for Windowsunbinder: Model un binding for route value dictionariesWindows Workflow Foundation on Codeplex: This site has previews of Workflow features which are released out of band for the purposes of adoption and feedback.XNA RSM Render State Manager: Render state management idea for XNA games. Enables isolation between draw calls whilst reducing DX9 SetRenderState calls to the minimum.New ReleasesAgr.CQRS: Sourcecode package: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. This dow...Book Cataloger: Preview 0.1.6a: New Features: Export to Word 2007 Bibliography format Dictionary list editors for Binding, Condition Improvements: Stability improved Content ...Braintree Client Library: Braintree-1.1.2: Includes minor enhancements to CreditCard and ValidationErrors to support upcoming example application.CassiniDev - Cassini 3.5 Developers Edition: CassiniDev v3.5.0.5: For usage see Readme.htm in download. 
New in CassiniDev v3.5.0.5 Reintroduced the Lib project and signed all Implemented the CassiniSqlFixture -...Composure: Calcium-64420-VS2010rc1.NET4.SL3: This is a simple conversion of Calcium (rev 64420) built in VS2010 RC1 against .NET4 and Silverlight 3. No source files were changed and ALL test...Composure: MS AJAX Library (46266) for VS2010 RC1 .NET4: This is a quick port of Microsoft's AJAX Library (rev 46266) for Visual Studio 2010 RC1 built against .NET 4.0. Since this conversion was thrown t...Composure: MS Web Test Lightweight for VS2010 RC1 .NET4: A simple conversion of Microsoft's Web Test Lightweight for Visual Studio 2010 RC1 .NET 4.0. This is part of a larger "special request" conversion...CoNatural Components: CoNatural Components 1.5: Supporting new data types: Added support for binary data types -> binary, varbinary, etc maps to byte[] Now supporting SQL Server 2008 new types ...Extensia: Extensia 2010-03-05: Extensia is a very large list of extension methods and a few helper types. Some extension methods are not practical (e.g. slow) whilst others are....Fluent Assertions: Fluent Assertions release 1.1: In this release, we've worked hard to add some important missing features that we really needed, and also improve resiliance against illegal argume...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0 RC: Fluent Ribbon Control Suite 1.0 (Release Candidate)Includes: Fluent.dll (with .pdb and .xml, debug and release version) Showcase Application Sa...FluentNHibernate.Search: 0.1 Beta: First beta versionFolderSize: FolderSize.Win32.1.0.7.0: FolderSize.Win32.1.0.6.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...Free Silverlight & WPF Chart Control - Visifire: Silverlight and WPF Step Line Chart: Hi, With this release Visifire introduces Step Line Chart. This release also contains fix for the following issues: * In WPF, if AnimatedUpd...Html to OpenXml: HtmlToOpenXml 1.0: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5, Dependencies on System.Drawing.dll and Docu...Line Counter: 1.5.1: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.1 Added outline icons and lin...Lokad Cloud - .NET O/C mapper (object to cloud) for Windows Azure: Lokad.Cloud v1.0.662.1: You can get the most recent release directly from the build server at http://build.lokad.com/distrib/Lokad.Cloud/Lost in Translation: LostInTranslation v0.2: Alpha release: function complete but not UX complete.MDownloader: MDownloader-0.15.7.56349: Supported large file resumption. Fixed minor bugs.Mini C# Lab: Mini CSharp Lab Ver 1.4: The primary new feature of Ver 1.4 is batch mode! Now you can run Mini C# Lab program as a scheduled task, no UI interactivity is needed. Here ar...Mobile Store: First drop: First droppatterns & practices SharePoint Guidance: SPG2010 Drop6: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...Picasa Downloader: PicasaDownloader (41446): Changelog: Replaced some exception messages by a Summary dialog shown after downloading if there have been problems. 
Corrected the Portable vers...Pod Thrower: Version 1: This is the first release, I'm sure there are bugs, the tool is fully functional and I'm using it currently.PowerShell Provider BizTalk: BizTalkFactory PowerShell Provider - 1.1-snapshot: This release constitutes the latest development snapshot for the Provider. Please, leave feedback and use the Issue Tracker to help improve this pr...Resharper Settings Manager: RSM 1.2.1: This is a bug fix release. Changes Fixed plug-in crash when shared settings file was modified externally.Reusable Library Demo: Reusable Library Demo v1.0.2: A demonstration of reusable abstractions for enterprise application developerSharePoint 2010 Toggle User Interface: SharePoint Toggle User Interface: Release 1.0.0.0Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity(net3.5 MySQL) 1.0 Beta: MySQL VS 2008 EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowserTortoiseHg: TortoiseHg 1.0: http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before in...Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Rangers Virtualization Guidance: Rangers Virtualization Guidance Focused guidance on creating a Rangers base image manually and introduction of PowerShell scripts to automate many ...Visual Studio DSite: Advanced Email Program (Visual Basic 2008): This email program can send email to any one using your email username and email credentials. The email program can also attatch attactments to you...WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6: Several improvements and bug fixes have gone into the comment parsing code for the registers. The plug-in should now correctly pay attention to th...WSDLGenerator: WSDLGenerator 0.0.0.3: - Fixed SharePoint generated *.wsdl.aspx file - Added commandline option -wsdl which does only generate the wsdl file.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterFluent AssertionsComposureDiffPlex - a .NET Diff Generator

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising in the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example I’ll use in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I’m using are  LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow.  In Twitter API vernacular, these people are called “Friends”; though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly. Where Distinct comes into play is because I want to have a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn’t do anything with the drop-down list and I leave that up to imagination for what it’s practical purpose could be; perhaps a filter for the list if I only want to see a certain person’s tweets or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth), for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC2 application, written with VS 2010. 
Here’s the View that produces this screen: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %> <%@ Import Namespace="DistinctSelectList.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">     Home Page </asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">     <fieldset>         <legend>Twitter Friends</legend>         <div>             <%= Html.DropDownListFor(                     twendVM => twendVM.FriendNames,                     Model.FriendNames,                     "<All Friends>") %>         </div>         <div>             <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)                    .Name("TwitterFriendsGrid")                    .Columns(cols =>                     {                         cols.Template(col =>                             { %>                                 <img src="<%= col.ImageUrl %>"                                      alt="<%= col.ScreenName %>" />                         <% });                         cols.Bound(col => col.ScreenName);                         cols.Bound(col => col.Tweet);                     })                    .Render(); %>         </div>     </fieldset> </asp:Content> As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s Avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { /// /// For finding friend info on screen /// public class TwitterFriendsViewModel { /// /// Display names of friends in drop-down list /// public List FriendNames { get; set; } /// /// Display tweets in grid /// public List Tweets { get; set; } } } I created the TwitterFreindsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below: namespace DistinctSelectList.Models { /// /// Info on friend tweets /// public class TweetViewModel { /// /// User's avatar /// public string ImageUrl { get; set; } /// /// User's Twitter name /// public string ScreenName { get; set; } /// /// Text containing user's tweet /// public string Tweet { get; set; } } } The initial Twitter query returns much more information than we need for our purposes and this a special class for displaying info in the View.  Now you know about the View and how it’s constructed. Let’s look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator. 
However, I’ll show you the entire controller, below,  so that you can see how it all fits together: using System.Linq; using System.Web.Mvc; using DistinctSelectList.Models; using LinqToTwitter; namespace DistinctSelectList.Controllers { [HandleError] public class HomeController : Controller { private MvcOAuthAuthorization auth; private TwitterContext twitterCtx; /// /// Display a list of friends current tweets /// /// public ActionResult Index() { auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken); string accessToken = auth.CompleteAuthorize(); if (accessToken != null) { InMemoryTokenManager.AccessToken = accessToken; } if (auth.CachedCredentialsAvailable) { auth.SignOn(); } else { return auth.BeginAuthorize(); } twitterCtx = new TwitterContext(auth); var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); var twendsVM = new TwitterFriendsViewModel { Tweets = friendTweets, FriendNames = friendNames }; return View(twendsVM); } public ActionResult About() { return View(); } } } The important part of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience: var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); The LINQ to Twitter query above simplifies what we need to work with in the View and the reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList.  This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates.  This will send you back to the code scratching your head, but there’s a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>.  In the case above, no parameters, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list. 
You don’t have problems with the built-in types, such as string, int, DateTime, etc, because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don’t implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects.  You don’t have this problem with value types because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn’t produce the results we wanted was because we used a type that doesn’t define its own equality and Distinct used the default reference equality. This resulted in all objects being included in the results because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library.  Therefore, the solution is to create a custom type to implement IEqualityComparer<T>, as in the SelectListItemComparer class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { public class SelectListItemComparer : EqualityComparer { public override bool Equals(SelectListItem x, SelectListItem y) { return x.Value.Equals(y.Value); } public override int GetHashCode(SelectListItem obj) { return obj.Value.GetHashCode(); } } } The SelectListItemComparer class above doesn’t implement IEqualityComparer<SelectListItem>, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer is an abstract class, that implements IEqualityComparer<T> with Equals and GetHashCode abstract methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal.   Since SelectListItem.Value is type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class.You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let’s fix the problem. The code below modifies the query where we want distinct values: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct(new SelectListItemComparer()) .ToList(); Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept IEqualityComparer<T>’s, You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well. 
Now you know how to make Distinct work properly with custom classes, and the same technique fixes problems with any other operators that require equality comparisons. @JoeMayo
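For readers who want to try this outside of the MVC sample, here is a minimal, self-contained sketch (not taken from the article; the screen names are placeholders) contrasting the parameterless Distinct() with the comparer-based overload, using the same SelectListItemComparer idea described above.

```csharp
// Requires a reference to System.Web.Mvc for SelectListItem.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

class DistinctDemo
{
    // Same idea as the article's comparer: equality is delegated to Value.
    class SelectListItemComparer : EqualityComparer<SelectListItem>
    {
        public override bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Value.Equals(y.Value);
        }

        public override int GetHashCode(SelectListItem obj)
        {
            return obj.Value.GetHashCode();
        }
    }

    static void Main()
    {
        var screenNames = new[] { "JoeMayo", "JoeMayo", "linq2twitr" };

        var items = screenNames.Select(
            name => new SelectListItem { Text = name, Value = name });

        // Reference equality: every instance is "distinct", so this prints 3.
        Console.WriteLine(items.Distinct().Count());

        // Value-based equality via the comparer: this prints 2.
        Console.WriteLine(items.Distinct(new SelectListItemComparer()).Count());
    }
}
```

The first count is 3 because each SelectListItem is a separate reference; the second is 2 because equality is delegated to the Value string.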

    Read the article

  • Need a solution to store images (1 billion, i.e. 1,000,000,000) that users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution for storing images (1 billion) that users will upload to a website via PHP or JavaScript (the site will serve roughly 1 billion page views a month on Linux Debian), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px and 10 larger, script-resized images capped at 500px in width or height depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume a LEMP stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type application whose content is text and images. Since storing huge numbers of user-uploaded images in a single directory (or plain NFS share) is known to be a bad idea, please explain the architecture and configuration of the storage setup you would recommend for 1 billion images (no third-party cloud storage like S3; it has to live in our private data center on our own hardware). The solution has to cover both the storage itself and how the uploaded images are organized. How should we organize users' images, given that a single user will never have more than 20 images (10 thumbnails and 10 large images with width or height up to 500px)? Please keep in mind that the layout must be structured so we can fetch a single user's images programmatically, via PHP/JavaScript or an API, from some kind of unique user identifier.
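    The excerpt does not include an answer, but one common way to keep this many files manageable is to derive a fixed-depth directory prefix from a hash of the user's unique identifier, so no single directory accumulates millions of entries. The sketch below is purely illustrative (written in C# even though the question mentions PHP; the root path, MD5 choice, and two-level fan-out are assumptions, not requirements from the question).

```csharp
// Illustrative hash-sharded path layout: about 65,536 buckets (256 x 256),
// with each user's (at most 20) images grouped under their own folder.
using System;
using System.Security.Cryptography;
using System.Text;

static class ImagePathSharder
{
    // e.g. GetPath("/srv/images", "12345", "photo1_thumb.jpg")
    //      -> "/srv/images/ab/cd/12345/photo1_thumb.jpg"
    public static string GetPath(string rootDir, string userId, string fileName)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(userId));
            string level1 = hash[0].ToString("x2"); // 256 first-level buckets
            string level2 = hash[1].ToString("x2"); // 256 x 256 second-level buckets
            return string.Format("{0}/{1}/{2}/{3}/{4}",
                rootDir.TrimEnd('/'), level1, level2, userId, fileName);
        }
    }
}
```

    Because the path is a pure function of the user id and file name, any front end (PHP, JavaScript via an API, or anything else) can compute where a given user's thumbnails and large images live without a database lookup.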

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging about developing a URL-shrinking service in ASP.NET MVC, but could not complete it because of my professional projects. Recently I managed to find some time to finish the remaining features we planned for the initial release, so I am announcing the official release; the source code is hosted on CodePlex, and you can also see it live in action over here. The features we have implemented so far: Public: OpenID Login. Base 36 and 62 based URL generation. 301 and 302 Redirect. Custom Alias. Maintaining Generated URLs of User. URL Thumbnail. Spam Detection through Google Safe Browsing. Preview Page (with Google warning). REST-based API for URL shrinking (json/xml/text). Control Panel: Application Health monitoring. Marking URL as Spam/Safe. Block/Unblock User. Allow/Disallow User API Access. Manage Banned Domains. Manage Banned IP Addresses. Manage Reserved Aliases. Manage Bad Words. Twitter Notification when spam is submitted. Behind the scenes it is developed with: Entity Framework 4 (Code Only), ASP.NET MVC 2, AspNetMvcExtensibility, Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects), DotNetOpenAuth, Elmah, Moq, xUnit.net, and jQuery. We will also be releasing a minor update in a few weeks which will contain some of the popular Twitter client plug-ins and samples showing how to use the REST API; we will also try to include the NHibernate + Spark version in that release. In the next release (timeline not yet decided), we will include geo-coding and some rich reporting for both users and administrators. Enjoy!!!
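    As an aside, the "Base 36 and 62 based URL generation" feature listed above is typically implemented by converting a numeric database id into a short alias. The sketch below only illustrates that general idea and is not Shrinkr's actual code; the alphabet and method names are made up for the example.

```csharp
// Converts a non-negative numeric id into a short base-36 or base-62 string.
using System;
using System.Text;

static class ShortCode
{
    private const string Alphabet =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    public static string Encode(long id, int radix = 62)
    {
        if (radix < 2 || radix > Alphabet.Length)
            throw new ArgumentOutOfRangeException("radix");
        if (id == 0) return "0";

        var sb = new StringBuilder();
        while (id > 0)
        {
            sb.Insert(0, Alphabet[(int)(id % radix)]);
            id /= radix;
        }
        return sb.ToString(); // e.g. Encode(125) == "21" in base 62
    }
}
```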

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, and ones that retrieve values from session, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the CheckBoxList control can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out of the box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more! Read More >
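    As a rough sketch of the technique the article describes (the control and parameter names below are invented for illustration, not taken from the article), the checked values can be collected into a comma-delimited string in the code-behind and handed to the data source's parameter:

```csharp
// Code-behind sketch: build "Fiction,History,..." from the checked items and
// pass it to a (hypothetical) GenreList parameter on the SqlDataSource.
using System;
using System.Linq;
using System.Web.UI.WebControls;

public partial class BookFilter : System.Web.UI.Page
{
    protected void FilterButton_Click(object sender, EventArgs e)
    {
        string csv = string.Join(",",
            GenresCheckBoxList.Items.Cast<ListItem>()
                .Where(item => item.Selected)
                .Select(item => item.Value)
                .ToArray());

        // The parameterized SelectCommand would then filter on this list,
        // for example via a user-defined split function on the SQL side.
        BooksDataSource.SelectParameters["GenreList"].DefaultValue = csv;
        BooksGridView.DataBind();
    }
}
```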

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: Different user on the same device: WORKED. Problem user on a different device: FAILED.   Seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID. Turns out, it was the old familiar inheritable-permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling inheritable permissions: Open ADUC and enable Advanced Features. Then open Properties and go to the Security tab for the user in question. Click on Advanced, and the following screen should pop up: Verify that "Include inheritable permissions from this object's parent" is *checked*.   You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the built-in security of Active Directory. People in the following groups will have this flag altered by AD: Account Operators, Administrators, Backup Operators, Domain Admins, Domain Controllers, Enterprise Admins, Print Operators, Read-Only Domain Controllers, Replicator, Schema Admins, Server Operators. Once the box is checked, permissions will flow and the user will be set correctly. Even if the box gets unchecked again, the user will function normally, as the proper permissions are now configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)   -Chris
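    For admins who would rather check this condition programmatically than through ADUC, the following sketch (not from the original post; the LDAP path is a placeholder) uses System.DirectoryServices to report whether inheritance is blocked on a user object, with the fix shown in comments:

```csharp
// Requires a reference to System.DirectoryServices.
using System;
using System.DirectoryServices;

class InheritanceCheck
{
    static void Main()
    {
        using (var user = new DirectoryEntry(
            "LDAP://CN=Some User,OU=Staff,DC=example,DC=com")) // placeholder DN
        {
            ActiveDirectorySecurity security = user.ObjectSecurity;

            if (security.AreAccessRulesProtected)
            {
                Console.WriteLine("Inheritable permissions are disabled for this user.");

                // Equivalent to ticking the checkbox in ADUC:
                // security.SetAccessRuleProtection(false, false);
                // user.CommitChanges();
            }
        }
    }
}
```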

    Read the article

  • How will people upgrade from 12.10 to 14.04 after 13.04 is EOL?

    - by Dave Jones
    Looking at https://wiki.ubuntu.com/Releases, 13.04 will reach EOL in January 2014, while 12.10 will reach EOL in April 2014. Therefore, if a 12.10 user hasn't upgraded to 13.04 and subsequently to 13.10, there will be a 3-month period where a 12.10 user has a supported version of Ubuntu but is unable to upgrade. I asked this question a number of months ago, and the suggestion was that the hope was that there would be an upgrade path from 12.10 to 14.04. Could somebody confirm whether this is still the case, or if not, what the plans are for 12.10 users after 13.04 becomes EOL? Edited for clarification: The particular issue I was concerned about is that once 13.04 goes EOL, a 12.10 user would in theory lose the ability to upgrade once the 13.04 repos are removed from the normal release repository. Using the old-releases method would be a way around the issue, but it would make things more complicated for a less experienced user. An alternative could be for the 13.04 repos to be left available for the 3-month interim period so that a 12.10 installation could still be upgraded to 13.04 and subsequently to 13.10; however, that doesn't seem an optimal solution, since users might take it to mean that support for 13.04 was being continued. If a direct upgrade from 12.10 to 14.04 were to be made available, it would only be available once 14.04 was released, which still leaves the 3 months between January and April 2014 where there may be some confusion. I suspect it's not going to affect a significant number of users: if somebody has upgraded from 12.04 LTS to 12.10, in all probability they'll have continued to upgrade to 13.04 and upwards, because they'd made the choice to use current rather than LTS releases. It would just be useful to have some clarification of the situation that people can be referred to in advance of 13.04 going EOL, rather than hitting the cut-off point when it's too late for users to make the decision and they're left in limbo.

    Read the article

  • CodePlex Daily Summary for Wednesday, January 12, 2011

    CodePlex Daily Summary for Wednesday, January 12, 2011Popular ReleasesGoogle URL Shortener API for .NET: Google URL Shortener API v1: According follow specification: http://code.google.com/apis/urlshortener/v1/reference.htmljGestures: a jQuery plugin for gesture events: 0.81: added event substitution for IE updated index.htmlStyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release: Features: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support alert message template 2. dynamic toolbar commands depending on functionality 3. fixed some bugs 4. refactored part of the code, now more stable and more clean upFacebook C# SDK: 4.2.1: - Authentication bug fixes - Updated Json.Net to version 4.0.0 - BREAKING CHANGE: Removed cookieSupport config setting, now automatic. This download is also availible on NuGet: Facebook FacebookWeb FacebookWebMvcUmbraco CMS: Umbraco 4.6: The Umbraco 4.6 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains nearly 200 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login timeout ...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with latest PHP opensource applications and several issue fixes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger. Changes made within this release include following: New features available only in Phalanger. 
Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...EnhSim: EnhSim 2.3.0: 2.3.0This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlTweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 7 ChangesFixes the regression issue in OAuth from Preview 6 Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Third Party Library VersionsHammock v1.0.6: http://hammock.codeplex.com Json.NET 3.5 Release 8: http://json.codeplex.comExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! 
Conclusion : - UltimateJB203PSXXXDEFAULTKAKAROTO=> Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43 - UltimateJB203PS341_HERMES => Pas de spoof mais version hermes 4b - UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof des firmwares 3.50 et 3.55 au li....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5- ASP.NET MVC 3 and EF Code First: Demo web app ASP.NET MVC 3, Razor and EF Code FirstVidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch...New ProjectsASP.NET MVC Scaffolding: Scaffolding package for ASP.NETAstor: OData Explorer: OData ExplorerBasic Users Community: A simple user community with threads and posts.Bukkit Server Manager: BSM makes server managing easy we have multiple type and database support including: MySql, SQLite types: VPS, Dedicated, Home PCCh4CP: Chamber 4 control programDotNetNuke Telerik Library: A set of Telerik wrappers for DotNetNuke module developers to utilize which aren't yet included as of 5.6.1. Eventually this will be offloaded to the core. Enjoy Life: our fypFolderSizeChecker: It suppose to check the size of big folders in specific partition and help user to find the most disk usage location. (It's simple project so please don't expect big and complex algorithms)HomeTeamOnline: This is project of HomeTeamOnlineICSWorld: This is project of ICSWorldIMAP Client for .NET 4.0 using LumiSoft: Develop an IMAP client using this sample project based on the LumiSoft .NET open source project. This project compiles in .NET 4.0 and demonstrates how to pull email using IMAP. The purpose of the project is for email auto processing.MUIExt (Multilingual User Interface Extender): MUIExt makes it easier for SharePoint 2010 users to create multilingual sites. You'll no longer have to live with the MUI limitations or have to manage variations. It's developed in csharp.Phoenix Service Bus: The goal of this pServiceBus is to provide an API and Service Components that would make implementing an ESB Infrastructure in your environment. It's developed in C#, and also have API written for Javascript Clients PhotoSnapper: Home project just to rename photos or .mov files in a folder starting from from a user defined number.redditfier: A windows application to notify redditors with new posts.SharePoint Field Updater: Automatically update sub fields according to a lookup field. 
For example: Updating field "Contact" will automatically put "Contact Email" and "Address" in the appropriate text fields.TXLCMS: emptyUmbraco Spark engine: Spark macro engine for UmbracoUrdu Translation: Urdu Translation Project WFTestDesign: BizUnit WF is based on BizUnit solution that allows user to define a test using WorkFlow UI, custom activities designed in this extension and general Workflow activities.It's enable also to use breakpoint in test. It's developed in C#.WPF Date Range Slider: A WPF Date Range Slider user control written with C# to allow your users to choose a range of dates using a double thumbed slider control.WPMind Framework for WP7: This project is used to provide some Windows Phone 7 controls for Windows Phone 7 Silverlight developer. Please join us if you are interested in this project.

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS!(this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses built in feature to call “lucreate -n S10_12_07REC” behind scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, the will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two-steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE)  “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers.  For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward.  For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straight forward way to patch, and far fewer prerequisites to satisfy in getting there.  Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment. Please let us know what you think?  Until next time...\Leon-- Leon Shaner | Senior IT/Product ArchitectSystems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article
