Search Results

Search found 2436 results on 98 pages for 'verify'.

Page 78 of 98

  • Problems finding classes in namespace and testing extend expected parent

    - by Matt
    I am building a site in ASP.NET MVC, and as part of that I am adding certain things to my Site.Master. I want to make sure that all of my model classes extend a certain base class that contains all of the pieces the Site.Master needs in order to work, and I want a test that verifies this assumption isn't broken (I believe this will save me time when I forget about it and can't figure out why a new combination isn't working). I wrote a test that I thought would help, but I am running into two problems. First, the LINQ call has suddenly stopped finding the one example model class I have so far; I am admittedly still a bit new to LINQ. Second, when it did find the class earlier, I couldn't get the test to verify that the class inherits from the base class. Here is the example test:

        [Test]
        public void AllModelClassesExtendAbstractViewModel()
        {
            var abstractViewModelType = typeof (AbstractViewModel);
            Assembly baseAssembly = Assembly.GetAssembly(abstractViewModelType);
            var modelTypes = baseAssembly.GetTypes()
                .Where(assemblyType => (assemblyType.Namespace.EndsWith("Models")
                    && assemblyType.Name != "AbstractViewModel"))
                .Select(assemblyType => assemblyType);
            foreach (var modelType in modelTypes)
            {
                Assert.That(modelType.IsSubclassOf(abstractViewModelType), Is.True,
                    modelType.Name + " does not extend AbstractViewModel");
            }
        }
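    One possible culprit (an assumption on my part, not something stated above) is that some types in the assembly have a null Namespace, which makes assemblyType.Namespace.EndsWith(...) misbehave; also note that Assembly.GetAssembly(abstractViewModelType) only scans the assembly that contains AbstractViewModel, so model classes in a separate project would never be found. A minimal null-safe sketch of the same test:

        [Test]
        public void AllModelClassesExtendAbstractViewModel()
        {
            var abstractViewModelType = typeof(AbstractViewModel);
            var modelTypes = Assembly.GetAssembly(abstractViewModelType)
                .GetTypes()
                // Guard against types with a null namespace (e.g. compiler-generated types)
                // and exclude the base type itself rather than matching on its name.
                .Where(t => t.Namespace != null
                    && t.Namespace.EndsWith("Models")
                    && t != abstractViewModelType);

            foreach (var modelType in modelTypes)
            {
                Assert.That(modelType.IsSubclassOf(abstractViewModelType), Is.True,
                    modelType.Name + " does not extend AbstractViewModel");
            }
        }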

    Read the article

  • ldap login form works, but need to add active-directory group access

    - by Brad
    I created a form that asks you to log in, then verifies the user/pass against the LDAP server/Active Directory; if successful, it creates a session, which is checked on every page. Now I want to take the session value, which is the username of the person who is logged in, and do a search for them using ldap_search, so I can check what group they belong to and pass that group through a function to verify that they can view that page. Each page will be accessible to a certain group or groups of users; those groups are defined within Active Directory. I am unsure how to do that using ldap_search, or maybe that is just one piece of the puzzle I am trying to solve. Any help is appreciated - thank you! In the example code below, it is checking whether the user belongs to the Student Active Directory group (I do not know if this code works, but it should give you an idea of what I want to accomplish).

        $filter = "CN=StudentCN=Users,dc=domain,dc=control";
        $result = ldap_search($ldapconn, $filter, $valid_session_username);
        if($result == TRUE) {
            print $valid_session_username.' does have access to this page';
        } else {
            print $valid_session_username.' does NOT have access to this page';
        }

    Read the article

  • WCF with SSL- not finding localhost

    - by SteveCav
    Hi guys, I'm trying to get WCF to use SSL with ANYTHING for FIVE DAYS now. I've gone through countless walkthroughs, generated more certificates than a mail order diploma company, even tried hot fixes. After working with MS dev tools since VB1, I am now considering flipping burgers as a career option. WCF, as far as I can see, is a complete lemon. Anyway, to get to my actual question: if I run through this walkthrough: http://msdn.microsoft.com/en-us/library/ff648840.aspx I get to step 11 (adding the service reference) and get "There was an error downloading metadata from the address. Please verify that you have entered a valid address". The error details give:

        There was an error downloading 'https://localhost/SSL6/Service.svc'.
        Unable to connect to the remote server.
        No connection could be made because the target machine actively refused it 127.0.0.1:443

    I'm using VS2008 on Windows 7 with IIS7. I followed the walkthrough exactly (apart from step 5, which was different on IIS7: I went into "SSL Settings" for the virtual directory), and my config matches what the walkthrough shows (yes, I've used httpsGetEnabled and mexHttpsBinding). Anyone care to save my sanity and job?

    Read the article

  • How can you make a PHP application require a key to work?

    - by jasondavis
    About 4 years ago I used a PHP product called aMember Pro, a membership script which has plugins for like 30 different payment processors; it was an easy way to set up an automated membership site where users would pay and get access to a certain area. The script used ionCube (http://www.ioncube.com/sa_encoder.php) to prevent non-paying users from using the script. It required that you register the domain the script would be used on; you were then given a key to enter into a file that would make the system/script work. Now I want to know how to do such a thing myself. I know the ionCube encoder just makes it hard to see the code. In the script I mentioned, they just had a small section at the top of one of the included pages that was encrypted, and without that part of the code the script would break. In addition, if the owner of the script did not put your domain in the list and give you a valid key, it would not work, and if you tried to use the script on a different domain it would not work. I realize that somewhere in the encrypted code it must have sent your key to their server and checked that it was valid for the domain name it is on; or possibly it did not even do that, and the key was just verified against the domain the script was on, which is more likely what it did. Here is where the real question is: how would you make a script require the encrypted portion? If I made a script and had a small encrypted part at the top, it seems a user would be able to easily just remove the encrypted part, figure out what the non-encrypted part is doing, and fix it to work. Any ideas?

    Read the article

  • Google Maps API and "rightclick" events on Macs

    - by samc
    Using the Google Maps API (v3), I can create a map and handle normal click events just fine, but when I want to handle rightclick events, it doesn't work on Macs. I assume this is because a right-click on a Mac is actually delivered as a ctrl-click, and the Google Maps API MouseEvent doesn't provide information about modifier keys, so I can't check for the ctrl key. I tried adding a "capture" event listener to the document that converts the click event to a rightclick event.

        function convertClick(e) {
          if (e.ctrlKey) {
            e.button = 2;
          }
        }
        document.addEventListener("click", convertClick, true)

    I added an alert to verify that the condition is correct, but modifying the event in this way didn't work. So, I decided to have my event handler set a global flag that my click handler could check. If the flag is set, it means ctrl was pressed, so the click handler just invokes the rightclick handler.

        var ctrl;
        function captureCtrl(e) {
          ctrl = e.ctrlKey;
        }

    This approach worked great, except for one thing: the ctrl flag gets set for the click after the one that occurred while ctrl was pressed. That means the event handler is being called during the bubble phase rather than the capture phase, which could also explain why the event-modification approach didn't work. So, my question is: how can you detect "rightclick" events from Macs with the Google Maps API? I can't be the first person to want to do this. That said, when I right-click on the map at http://maps.google.com from a Windows or Linux machine, I get a popup box with options like "Directions from here...", etc. On a Mac, nothing happens. So not even the main Google Maps page has solved this problem. ...maybe I am the first person to want to do this.

    Read the article

  • Cannot set property X of undefined - programmatically created element

    - by Dave
    I am working on a class to create a fairly complex custom element. I want to be able to define element attributes (mainly CSS) as I am creating the elements. I came up with the function below to quickly set attributes for me given an input array:

        // Sample array structure of apply_settings
        //var _button_div = {
        //    syle : {
        //        position: 'absolute',
        //        padding : '2px',
        //        right   : 0,
        //        top     : 0
        //    }
        //};
        function __apply_settings(__el, apply_settings) {
          try {
            for (settingKey in apply_settings) {
              __setting = apply_settings[settingKey];
              if (typeof(__setting) == 'string') {
                __el[settingKey] = __setting;
              } else {
                for (i in __setting) {
                  __el[settingKey][i] = apply_settings[settingKey][i];
                }
              }
            }
          } catch(ex) {
            alert(ex + " " + __el);
          }
        }

    The function works fine except here:

        function __draw_top() {
          var containerDiv = document.createElement('div');
          var buttonDiv = document.createElement('div');
          __apply_settings(containerDiv, this.settings.top_bar);
          if (global_settings.show_x)
            buttonDiv.appendChild(__create_img(global_settings.images.close));
          containerDiv.appendChild(buttonDiv);
          __apply_settings(buttonDiv, this.settings.button_div);
          return containerDiv;
        }

    The specific line that is failing is __apply_settings(buttonDiv, this.settings.button_div); with the error "Cannot set property 'position' of undefined". So naturally, I assumed that buttonDiv is undefined. I put alert(ex + " " + __el); in __apply_settings() to verify what I was working with. Surprisingly enough, the element is a div. Is there any reason why this function would be working for containerDiv and not buttonDiv? Any help would be greatly appreciated :) EDIT: http://jsfiddle.net/Us8Zw/2/

    Read the article

  • Accessing a SharePoint site using the object model

    - by Prashanth
    Hi, I am trying to access a SharePoint site using the SP object model from a console application. I am trying to do something like this:

        SPSite site = new SPSite(sitePath);
        // Operations go here

    This works fine when the SharePoint site and the console app are on the same machine. However, when the console app and the site are on different machines, I get the error "The Web application at "http://server/url" could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application."

    Here are the things that I have already tried:
    1) I have tried accessing the site via both the IP address and the machine name, assuming it could be a DNS resolution issue.
    2) Initially I impersonated using a farm admin account; still I could not access it. Then I added myself as a farm admin, still no joy.
    4) The site is accessible via IE, so I guess it is not a permission issue.
    5) I have tried almost all the solutions suggested by various links obtained by googling the error message.

    I am trying this on SharePoint 2010; a similar issue occurs on 2007 also. Sometimes it's kind of frustrating to do SharePoint development, since I get the feeling of stumbling from one error to the next, with no clue as to what could be wrong and the error messages not being helpful in the least :(
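    For what it's worth, the SharePoint server object model (SPSite and friends) can only run on a machine that is part of the farm; a console app on a different machine typically has to use the SharePoint 2010 client object model or the web services instead. A minimal client-object-model sketch (assuming a reference to Microsoft.SharePoint.Client; the URL is a placeholder):

        using Microsoft.SharePoint.Client;

        class Program
        {
            static void Main()
            {
                // Runs on any machine that can reach the site over HTTP,
                // unlike SPSite, which requires being on a farm server.
                using (var ctx = new ClientContext("http://server/url"))
                {
                    Web web = ctx.Web;
                    ctx.Load(web);
                    ctx.ExecuteQuery();
                    System.Console.WriteLine(web.Title);
                }
            }
        }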

    Read the article

  • Rhino Mocks Sample How to Mock Property

    - by guazz
    How can I test that "TestProperty" was set to a value when ForgotMyPassword(...) was called?

        public interface IUserRepository
        {
            User GetUserById(int n);
        }

        public interface INotificationSender
        {
            void Send(string name);
            int TestProperty { get; set; }
        }

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class LoginController
        {
            private readonly IUserRepository repository;
            private readonly INotificationSender sender;

            public LoginController(IUserRepository repository, INotificationSender sender)
            {
                this.repository = repository;
                this.sender = sender;
            }

            public void ForgotMyPassword(int userId)
            {
                User user = repository.GetUserById(userId);
                sender.Send("Changed password for " + user.Name);
                sender.TestProperty = 1;
            }
        }

        // Sample test to verify that Send was called
        [Test]
        public void WhenUserForgetPasswordWillSendNotification_WithConstraints()
        {
            var userRepository = MockRepository.GenerateStub<IUserRepository>();
            var notificationSender = MockRepository.GenerateStub<INotificationSender>();
            userRepository.Stub(x => x.GetUserById(5)).Return(new User { Id = 5, Name = "ayende" });

            new LoginController(userRepository, notificationSender).ForgotMyPassword(5);

            notificationSender.AssertWasCalled(x => x.Send(null),
                options => options.Constraints(Rhino.Mocks.Constraints.Text.StartsWith("Changed")));
        }
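    For the property itself, Rhino Mocks stubs record setter calls too, so one approach (a sketch, not verified against this exact setup) is to assert on the setter the same way Send is asserted:

        // Assert that TestProperty was assigned the value 1 on the stub.
        notificationSender.AssertWasCalled(x => x.TestProperty = 1);

        // Or, if any assigned value is acceptable:
        notificationSender.AssertWasCalled(x => x.TestProperty = Arg<int>.Is.Anything);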

    Read the article

  • Should my validator have access to my entire model?

    - by wb
    As the title states, I'm wondering if it's a good idea for my validation class to have access to all properties of my model. Ideally, I would like to do that because some fields require 10+ other fields to verify whether they are valid. I could, but would rather not, have functions with 10+ parameters. Or would that make the model and validator too coupled with one another? Here is a little example of what I mean. This code, however, does not work because it causes an infinite loop!

        Class User
            Private m_UserID
            Private m_Validator

            Public Sub Class_Initialize()
            End Sub

            Public Property Let Validator(value)
                Set m_Validator = value
                m_Validator.Initialize(Me)
            End Property

            Public Property Get Validator()
                Validator = m_Validator
            End Property

            Public Property Let UserID(value)
                m_UserID = value
            End Property

            Public Property Get UserID()
                UserID = m_Validator.IsUserIDValid()
            End Property
        End Class

        Class Validator
            Private m_User

            Public Sub Class_Initialize()
            End Sub

            Public Sub Initialize(value)
                Set m_User = value
            End Sub

            Public Function IsUserIDValid()
                IsUserIDValid = m_User.UserID > 13
            End Function
        End Class

        Dim mike : Set mike = New User
        mike.UserID = 123456
        mike.Validator = New Validator
        Response.Write mike.UserID

    If I'm right and it is a good idea, how can I go ahead and fix the infinite loop in the Get property for UserID? Thank you.

    Read the article

  • Test MVC using moq

    - by Raminder
    I am new to Moq and I was trying to test (MVC-style) controller behaviour: when the view raises a certain event, the controller should call a certain function on the model. Here are the classes:

        public class Model
        {
            public void CalculateAverage() { ... }
            ...
        }

        public class View
        {
            public event EventHandler CalculateAverage;

            private void RaiseCalculateAverage()
            {
                if (CalculateAverage != null)
                {
                    CalculateAverage(this, EventArgs.Empty);
                }
            }
            ...
        }

        public class Controller
        {
            private Model model;
            private View view;

            public Controller(Model model, View view)
            {
                this.model = model;
                this.view = view;
                view.CalculateAverage += view_CalculateAverage;
            }

            private void view_CalculateAverage(object sender, EventArgs args)
            {
                model.CalculateAverage();
            }
        }

    and the test:

        [Test]
        public void ModelCalculateAverageCalled()
        {
            Mock<Model> modelMock = new Mock<Model>();
            Mock<View> viewMock = new Mock<View>();
            Controller controller = new Controller(modelMock.Object, viewMock.Object);

            viewMock.Raise(x => x.CalculateAverage += null, EventArgs.Empty);

            modelMock.Verify(x => x.CalculateAverage());
            // never comes here, test fails in above line and exits
            Assert.True(true);
        }

    The issue is that the test fails on the second-to-last line with "Invocation was not performed on the mock: x => x.CalculateAverage()". Another thing I noticed is that the test terminates on that line and the last line is never executed. Am I doing everything correctly?
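    One thing worth noting (an assumption about the cause, since the post doesn't say whether these members are virtual): Moq can only intercept interface members or virtual members, so if Model.CalculateAverage() and the View event are non-virtual, Raise() and Verify() never see them. A sketch of the usual workaround, putting the collaborators behind interfaces (IModel and IView are hypothetical names):

        // assumes: using System; using Moq; using NUnit.Framework;
        public interface IModel
        {
            void CalculateAverage();
        }

        public interface IView
        {
            event EventHandler CalculateAverage;
        }

        public class Controller
        {
            private readonly IModel model;

            public Controller(IModel model, IView view)
            {
                this.model = model;
                view.CalculateAverage += (s, e) => this.model.CalculateAverage();
            }
        }

        [Test]
        public void ModelCalculateAverageCalled()
        {
            var modelMock = new Mock<IModel>();
            var viewMock = new Mock<IView>();
            var controller = new Controller(modelMock.Object, viewMock.Object);

            // Raising the event on the mocked view should route through the controller
            // to the mocked model, which Verify can then observe.
            viewMock.Raise(v => v.CalculateAverage += null, EventArgs.Empty);
            modelMock.Verify(m => m.CalculateAverage());
        }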

    Read the article

  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires a HCRYPTHASH handle for signing. I create it and, as I already have the actual hash value, set it:

        CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
        CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);

    hashBytes is an array of 20 bytes. However, the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program:

        HCRYPTPROV cryptoProvider;
        CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0);

        HCRYPTHASH hash;
        HCRYPTKEY keyForHash;
        CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash);

        DWORD hashLength;
        CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
        printf("hashLength: %d\n", hashLength);

    And this prints out hashLength: 4! Can anyone explain what I am doing wrong, or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits)?

    Read the article

  • Programming help Loop adding

    - by Deonna
    I know this is probably really simple, but I'm not sure what I'm doing wrong... The assignment states: For the second program for this lab, you are to have the user enter an integer value in the range of 10 to 50. You are to verify that the user enters a value in that range, and continue to prompt him until he does give you a value in that range. After the user has successfully entered a value in that range, you are to display the sum of all the integers from 1 to the value entered. I have this so far:

        #include <iostream.h>

        int main()
        {
            int num, sum;

            cout << "do-while Loop Example 2" << endl << endl;

            do {
                cout << "Enter a value from 10 to 50: ";
                cin >> num;
                if (num < 10 || num > 50)
                    cout << "Out of range; Please try again..." << endl;
            } while (num < 10 || num > 50);

            {
                int i;
                int sum = 0;
                for (num = 1; num <= 50; num ++)
                    sum = sum + num;
            }

            cout << endl << "The sum is " << sum << endl;
            return 0;
        }

    I'm just not sure exactly what I'm doing wrong... I keep getting the wrong sum for the total...

    Read the article

  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory with files and a database table with what should be the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do this in a brute-force manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the file values in the directory array and use in_array() on the database table array to verify the filename is in that array; if not, build up an array of the missing filenames, then run a db query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory array will just be deleted from the table.

    Is there a better way to go about this? Or something better for this in PHP than in_array()?

    Read the article

  • How to fetch populated associated models in CakePHP when calling read()

    - by Code Commander
    I have the following Models:

        class Site extends AppModel {
            public $name = "Site";
            public $useTable = "site";
            public $primaryKey = "id";
            public $displayField = 'name';
            public $hasMany = array('Item' => array('foreignKey' => 'siteId'));

            public function canView($userId, $isAdmin = false) {
                if($isAdmin) {
                    return true;
                }
                return array_key_exists($this->id, $allowedSites);
            }
        }

    and

        class Item extends AppModel {
            public $name = "Item";
            public $useTable = "item";
            public $primaryKey = "id";
            public $displayField = 'name';
            public $belongsTo = array('Site' => array('foreignKey' => 'siteId'));

            public function canView($userId, $isAdmin = false) {
                // My problem appears to be the next line:
                return $this->Site->canView($userId, $isAdmin);
            }
        }

    In my controller I am doing something like this:

        $result = $this->Item->read(null, $this->request->id);

        // Verify permissions
        if(!$this->Item->canView($this->Session->read('userId'), $this->Session->read('isAdmin'))) {
            $this->httpCodes(403);
            die('Permission denied.');
        }

    I notice that in Item->canView(), $this->data['Site'] is populated with the column data from the site table, but it is merely an array and not an object. On the other hand, $this->Site is a Site object, but it has not been populated with the column data from the site table like $this->data. What is the proper way to have CakePHP give me the associated model as an object containing the data? Or am I going about this all wrong? Thanks!

    Read the article

  • Why are Facebook Likes Insisting on using Wrong Product Image...?

    - by Joan Kent
    Firstly, I'm not a web developer, so please be patient. I have read the other posts, but I think I have everything covered. My website http://www.joaniesgifts.co.uk includes the Like button on the product pages. However, I've found that certain product pages are using the incorrect image when a user likes the page. For example: http://www.joaniesgifts.co.uk/terramundi-money-pots/terramundi-money-pot-holiday-fund. I think this may have been down to an originally incorrect setup which is now corrected. However, the problem remains... The only thing I have to go on: if I use the Facebook URL linter (developers.facebook.com/tools/debug) on the above product page, I receive the following error:

        Object at URL 'http://www.joaniesgifts.co.uk/terramundi-money-pot-holiday-fund' of type '213689662010141:product' is invalid because the domain 'www.joaniesgifts.co.uk' is not allowed for the application id '213689662010141' which owns the specified object type. If you are the owner of this application, you can verify your configured 'Site Domain' at developers.facebook.com/apps/213689662010141.

    (I have verified my site's domain.) Everything else appears fine, except it is also showing the wrong image!! However, under Raw Open Graph Document Information it has the correct link! If I then click Graph API (graph.facebook.com/10150450766583352) it again shows that the wrong image was linked! I've no idea what else to do - can you help me? Kind Regards, Joan. PS: The Graph API shows the incorrect image after a scrape only minutes ago:

        {
          "url": "http://www.joaniesgifts.co.uk/terramundi-money-pot-holiday-fund",
          "type": "website",
          "title": "Terramundi Money Pot - Holiday Fund",
          "image": [
            {
              "url": "http://www.joaniesgifts.co.uk/index.php?route=product\u00252Fproduct\u00252Fcaptcha"
            }
          ],
          "updated_time": "2011-11-11T18:54:38+0000",
          "id": "10150450766583352"
        }

    Read the article

  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/foo= and #bar/bar= instance methods on Config:

        module Configurator
          class Config
            def initialize()
              @options = {}
            end

            def self.attr_option(*args)
              args.each do |a|
                if not self.method_defined?(a)
                  define_method "#{a}" do
                    @options[:"#{a}"] ||= {}
                  end

                  define_method "#{a}=" do |v|
                    @options[:"#{a}"] = v
                  end
                else
                  throw Exception.new("already have attr_option for #{a}")
                end
              end
            end
          end
        end

    So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined in Config. So a subsequent test will fail when it shouldn't, because foo is already defined:

        it "should support a specified option" do
          c = Configurator::Config
          c.attr_option :foo
          # ...
        end

        it "should support multiple options" do
          c = Configurator::Config
          c.attr_option :foo, :bar, :baz   # Error! :foo already defined by a previous test.
          # ...
        end

    Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?

    Read the article

  • How can I have 2 ADO access methods use the same Transaction?

    - by KevinDeus
    I'm writing a test to see if my LINQ to Entities statement works, and I'll be using this approach for others if I can get this concept going. My intention here is to INSERT a record with ADO, then verify it can be queried with LINQ, and then ROLLBACK the whole thing at the end. I'm using ADO to insert because I don't want to use the object or the entity model that I am testing; I figure a plain ADO INSERT should do fine. The problem is that they both use different types of connections. Is it possible to have these two different data access methods use the same transaction, so I can roll it back?

        _conn = new SqlConnection(_connectionString);
        _conn.Open();
        _trans = _conn.BeginTransaction();

        var x = new SqlCommand("INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) values('127', 'test2', 'user', '2-12-1939');", _conn);
        x.ExecuteNonQuery();

        // So far, so good. Adding a record to the table.
        // At this point, we need to do _trans.Commit() here because our Entity code can't use the
        // same connection. Then I have to manually delete in the TestHarness.TearDown...
        // I'd like to eliminate this step.

        // (this code is in another object, I'll include it for brevity. Imagine that I passed the connection in)
        // check to see if it is there
        using (var ctx = new XEntities(_conn)) // can't do this.. _conn is not an EntityConnection!
        {
            var retVal = (from m in ctx.Table1
                          where m.first_name == "test2"
                          where m.last_name == "user"
                          where m.Date_of_Birth == "2-12-1939"
                          where m.ID == 127
                          select m).FirstOrDefault();
            return (retVal != null);
        }

        // Do test..
        Assert.BlahBlah();
        _trans.Rollback();
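    One common way to get both data access stacks into the same transaction (a sketch under the assumption that XEntities has a default constructor with the right connection string, and that the test can use an ambient transaction rather than an explicit SqlTransaction) is System.Transactions.TransactionScope: both the SqlConnection and the entity context enlist in it automatically, and disposing the scope without calling Complete() rolls everything back.

        // Requires a project reference to System.Transactions and
        // "using System.Transactions;" at the top of the file.
        [Test]
        public void InsertIsVisibleToEntityQuery_ThenRolledBack()
        {
            using (var scope = new TransactionScope())
            {
                // The ADO.NET connection enlists in the ambient transaction when it opens.
                using (var conn = new SqlConnection(_connectionString))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) " +
                        "VALUES ('127', 'test2', 'user', '2-12-1939');", conn);
                    cmd.ExecuteNonQuery();
                }

                // The entity context opens its own connection, which also enlists.
                using (var ctx = new XEntities())
                {
                    var found = (from m in ctx.Table1 where m.ID == 127 select m).FirstOrDefault();
                    Assert.IsNotNull(found);
                }

                // No scope.Complete() call, so everything rolls back when the scope is disposed.
            }
        }

    Note that opening two connections inside one scope can escalate to a distributed transaction (MSDTC) on SQL Server 2005, so the Distributed Transaction Coordinator service would have to be running for this to work.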

    Read the article

  • Cannot connect to SQL Server from ASP.NET MVC app

    - by Dan Fergus
    I have an ASP.NET MVC app that has been on a hosted server for over a year, connecting to SQL Server. I've had to change hosting services; the new one supports MVC 1.0. I've also moved a non-MVC ASP app to the same hosting service. Now, my MVC-based app returns this error when I try to validate a user login:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    The non-MVC app can access the exact same database and authenticate users just fine. The MVC app, when run from my dev box, connects fine. It also runs/connects/authenticates without problem when I install and run the site from an internal SQL 2008 server running IIS 7. I, along with the hosting support techs, am at a loss as to how the exact same connection string works everywhere except on the hosted server, and only when run from inside an ASP.NET MVC web app. Any ideas would be greatly appreciated.

    Read the article

  • Browser window popups - risks and special features

    - by Sandeepan Nath
    1. What exactly is the security risk with popups?
    The new browsers provide settings to block window popups (on blocking, sites with active popups display a message to the user). What exactly is the security risk with popups? If allowing popups could let something dangerous execute, then the main window could do the same. Is that not the case? I suspect I don't know about some special powers of popup windows.
    2. Any special features of popup windows?
    Take for example the HDFC Bank netbanking site. The entire netbanking session happens in a popup window, and a user can neither manually edit the URL nor paste the URL into the main browser window; it does not work. Is a popup window needed for this feature? Does it improve security? (Asking because everything in this site revolves around security, so they must have done it for a reason.) Why else would they implement the entire netbanking flow in a popup window?
    3. Is it possible to override the browser's popup-blocking settings?
    Lastly, the HDFC site successfully displays a popup window even when popups are blocked in the browser settings. So, how do they do it? Is that a browser hack? To see this, go to http://hdfcbank.com/. Under the "Login to your account" section select "HDFC Bank NetBanking" and click the "Login" button. You can verify that even if popups are blocked / the popup blocker is enabled in the browser settings, this site is able to display popups. The answers to this question say that it is not possible to display popup windows if they have been blocked in browser settings.
    Solved: concluded with Pointy's solution and the comments under it. Here is a fiddle demonstrating the same.

    Read the article

  • Still Confused About Identifying vs. Non-Identifying Relationships

    - by Jason
    So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradictory to me. Here are the two questions I am looking at: What's the Difference Between Identifying and Non-Identifying Relationships; Trouble Deciding on Identifying or Non-Identifying Relationship. Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." An example of this that is given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused, as it says, "if a child identifies its parent, it is an identifying relationship." The answer then goes on to give examples such as SSN (which is identifying of a Person), but an address is not (because many people can live at an address). To me, this sounds more like the decision between a primary key and a non-primary key. My own gut feeling (and additional research on other sites) points to the first question and its response being correct. However, I wanted to verify before I continued forward, as I don't want to learn something wrong while I am working to understand database design. Thanks in advance.

    Read the article

  • application trying to connect to mirrored sql db

    - by hp
    Hello, we have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:

    1) "Login failed for user 'userid'"
    2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"

    We are running SQL 2005 and have a principal and a mirror db (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror db and find the failed login messages in there. The principal db is running fine and the other web apps are working great. This will happen for maybe 10 minutes, then the app pool recycles and it starts hitting the principal db again. Is there a configuration I have set incorrectly? My theory is that our principal db is forwarding the request to the mirror, but that should never happen. Any help??
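    One thing commonly checked in this setup (an assumption, since the connection strings aren't shown) is whether the application's connection string names the mirror as a Failover Partner; with ADO.NET's database mirroring support, the client should only redirect to the partner when the principal is genuinely unreachable. A sketch of what such a connection string looks like (server and database names are placeholders):

        // Hypothetical server/database names; the Failover Partner keyword tells ADO.NET
        // which server to try only if the principal cannot be reached.
        var connectionString =
            "Data Source=PrincipalServer;" +
            "Failover Partner=MirrorServer;" +
            "Initial Catalog=MyAppDb;" +
            "Integrated Security=True;";

        using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
        {
            conn.Open(); // connects to the principal; fails over to the mirror only if needed
        }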

    Read the article

  • WCF Async callback setup for polled device

    - by Mark Pim
    I have a WCF service set up to control a USB fingerprint reader from our .NET applications. This works fine and I can ask it to enroll users and so on. The reader allows identification (it tells you that a particular user has presented their finger, as opposed to asking it to verify that a particular user's finger is present), but the device must be constantly polled for its status while in identification mode; when a user is detected the status changes. What I want is for an interested application to notify the service that it wants to know when a user is identified, and provide a callback that gets triggered when this happens. The WCF service will return immediately and spawn a thread in the background to continuously poll the device. This polling could go on for hours at a time if no one tries to log in. What's the best way to achieve this? My service contract is currently defined as follows:

        [ServiceContract (CallbackContract=typeof(IBiometricCallback))]
        public interface IBiometricWcfService
        {
            ...
            [OperationContract (IsOneWay = true)]
            void BeginIdentification();
            ...
        }

        public interface IBiometricCallback
        {
            ...
            [OperationContract(IsOneWay = true)]
            void IdentificationFinished(int aUserId, string aMessage, bool aSuccess);
            ...
        }

    In my BeginIdentification() method, can I easily spawn a worker thread to poll the device, or is it easier to make the WCF service asynchronous?
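    One way this is often approached (a sketch, not a verified implementation; it assumes a session-capable duplex binding such as netTcpBinding or wsDualHttpBinding, and BiometricWcfService/TryIdentifyUser are hypothetical names) is to capture the caller's callback channel inside BeginIdentification(), start a background thread that polls the device, and invoke the callback when a user is identified:

        // assumes: using System.ServiceModel; using System.Threading;
        public class BiometricWcfService : IBiometricWcfService
        {
            public void BeginIdentification()
            {
                // Capture the caller's callback channel while still on the WCF request thread.
                var callback = OperationContext.Current.GetCallbackChannel<IBiometricCallback>();

                var poller = new Thread(() =>
                {
                    int userId;
                    // Poll the device until a user is identified.
                    while (!TryIdentifyUser(out userId))
                    {
                        Thread.Sleep(500);
                    }
                    callback.IdentificationFinished(userId, "User identified", true);
                });
                poller.IsBackground = true;
                poller.Start();
                // BeginIdentification returns immediately (IsOneWay = true).
            }

            private bool TryIdentifyUser(out int userId)
            {
                // Placeholder for the real fingerprint-reader polling call.
                userId = 0;
                return false;
            }
        }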

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for deflate or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?

    Read the article

  • SSL + Jquery + Ajax

    - by chobo2
    Hi, I'm starting to look into security for my site. I would consider my site a very low security risk, as it holds really no personal information from the user other than an email address. However, the security risk will go up a bit, as I am partnering with a company and the initial password for that company's users will essentially be the same password they use to get onto their network and every piece of software. So I have upped my security (which is fine by me... I wanted to get around to this anyway). One of my security concerns is this: a user logs in, a form submit (non-AJAX) is done, the password is hashed and salted and compared to the one in the database, and the user is rejected or allowed to proceed. So this uses no jQuery or AJAX, just ASP.NET MVC and C#. Still, if my understanding is right, the password is sent in clear text. So if I use SSL I would not need to worry about that; is this correct? If that is true, is that all I need? Second, the user can change their password at any time. This is done through AJAX, so when the password is sent it is sent in clear text (and I can verify this by looking at Firebug). If I have SSL enabled on this page, is that all I need, or do I need to do more? So I am just kind of confused about what I need to make the password being sent to the server (both the AJAX and the full-post ways) secure. I am not sure if I need to do more than SSL, or if that is enough; and if it is not enough, what is the next layer of security?

    Read the article
