Search Results

Search found 26412 results on 1057 pages for 'product key'.

Page 185/1057 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • How to map a Dictionary<string, string> spanning several tables

    - by Kim Johansson
    I have four tables: CREATE TABLE [Languages] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Code] NVARCHAR(10) NOT NULL, PRIMARY KEY ([Id]), UNIQUE INDEX ([Code]) ); CREATE TABLE [Words] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, PRIMARY KEY ([Id]) ); CREATE TABLE [WordTranslations] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Value] NVARCHAR(100) NOT NULL, [Word] INTEGER NOT NULL, [Language] INTEGER NOT NULL, PRIMARY KEY ([Id]), FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]), FOREIGN KEY ([Language]) REFERENCES [Languages] ([Id]) ); CREATE TABLE [Categories] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Word] INTEGER NOT NULL, PRIMARY KEY ([Id]), FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]) ); So you get the name of a Category via the Word - WordTranslation - Language relations. Like this: SELECT TOP 1 wt.Value FROM [Categories] AS c LEFT JOIN [WordTranslations] AS wt ON c.Word = wt.Word WHERE wt.Language = ( SELECT TOP 1 l.Id FROM [Languages] AS l WHERE l.[Code] = N'en-US' ) AND c.Id = 1; That would return the en-US translation of the Category with Id = 1. My question is how to map this using the following class: public class Category { public virtual int Id { get; set; } public virtual IDictionary<string, string> Translations { get; set; } } Getting the same as the SQL query above would be: Category category = session.Get<Category>(1); string name = category.Translations["en-US"]; And "name" would now contain the Category's name in en-US. Category is mapped against the Categories table. How would you do this and is it even possible?
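
    As a point of reference while working out the mapping, here is a hedged C# sketch (plain ADO.NET, not NHibernate; the connection string, class name and method name are made up for the example) of the join the Translations dictionary ultimately has to be filled from (language code as key, translated value as value). It can also serve as a manual fallback if a pure mapping turns out not to be possible.

      // Hypothetical helper: loads all translations for one category, keyed by language code.
      using System.Collections.Generic;
      using System.Data.SqlClient;

      public static class CategoryTranslations
      {
          public static IDictionary<string, string> Load(string connectionString, int categoryId)
          {
              var translations = new Dictionary<string, string>();
              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(
                  @"SELECT l.[Code], wt.[Value]
                    FROM [Categories] AS c
                    JOIN [WordTranslations] AS wt ON c.[Word] = wt.[Word]
                    JOIN [Languages] AS l ON wt.[Language] = l.[Id]
                    WHERE c.[Id] = @id", connection))
              {
                  command.Parameters.AddWithValue("@id", categoryId);
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                          translations[reader.GetString(0)] = reader.GetString(1); // Code -> Value
                  }
              }
              return translations;
          }
      }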

    Read the article

  • How to append \line into RTF using RichTextBox control

    - by Steve Sheldon
    When using the Microsoft RichTextBox control it is possible to add new lines like this... richtextbox.AppendText(System.Environment.NewLine); // appends \r\n However, if you now view the generated rtf the \r\n characters are converted to \par not \line How do I insert a \line control code into the generated RTF? What doesn't work: Token Replacement Hacks like inserting a token at the end of the string and then replacing it after the fact, so something like this: string text = "my text"; text = text.Replace("|", "||"); // replace any '|' chars with a double '||' so they aren't confused in the output. text = text.Replace("\r\n", "_|0|_"); // replace \r\n with a placeholder of |0| richtextbox.AppendText(text); string rtf = richtextbox.Rtf; rtf = rtf.Replace("_|0|_", "\\line"); // replace placeholder with \line rtf = rtf.Replace("||", "|"); // set back any || chars to | This almost worked, but it breaks down if you have to support right to left text, as the right to left control sequence always ends up in the middle of the placeholder. Sending Key Messages public void AppendNewLine() { Keys[] keys = new Keys[] {Keys.Shift, Keys.Return}; SendKeys(keys); } private void SendKeys(Keys[] keys) { foreach(Keys key in keys) { SendKeyDown(key); } } private void SendKeyDown(Keys key) { user32.SendMessage(this.Handle, Messages.WM_KEYDOWN, (int)key, 0); } private void SendKeyUp(Keys key) { user32.SendMessage(this.Handle, Messages.WM_KEYUP, (int)key, 0); } This also ends up being converted to a \par Is there a way to post a message directly to the msftedit control to insert a control character? I am totally stumped, any ideas guys? Thanks for your help!
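
    One more avenue worth trying, sketched below as an assumption rather than a confirmed fix: paste a raw RTF fragment through the RichTextBox.SelectedRtf property instead of calling AppendText, so the control is handed \line directly and never gets a chance to translate \r\n. Whether msftedit preserves the control word in the generated RTF would need to be verified.

      // Sketch: insert an RTF fragment that already contains \line at the end of the text.
      using System.Windows.Forms;

      public static class RichTextBoxRtfExtensions
      {
          public static void AppendRtfLine(this RichTextBox box)
          {
              box.Select(box.TextLength, 0);            // collapse the selection to the end
              box.SelectedRtf = @"{\rtf1\ansi \line }"; // paste a fragment that uses \line, not \par
          }
      }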

    Read the article

  • keyUp event heard?: Overridden NSView method

    - by Old McStopher
    UPDATED: I'm now overriding the NSView keyUp method from a NSView subclass set to first responder like below, but am still not seeing evidence that it is being called. @implementation svsView - (BOOL)acceptsFirstResponder { return YES; } - (void)keyUp:(NSEvent *)event { //--do key up stuff-- NSLog(@"key up'd!"); } @end --ORIGINAL POST-- I'm new to Cocoa and Obj-C and am trying to do a (void)keyUp: from within the implementation of my controller class (which itself is of type NSController). I'm not sure if this is the right place to put it, though. I have a series of like buttons each set to a unique key equivalent (IB button attribute) and each calls my (IBAction)keyInput method which then passes the identity of each key onto another object. This runs just fine, but I also want to track when each key is released. --ORIGINAL [bad] EXAMPLE-- @implementation svsController //init //IBActions - (IBAction)keyInput:(id)sender { //--do key down stuff-- } - (void)keyUp:(NSEvent *)event { //--do key up stuff-- } @end Upon fail, I also tried the keyUp as an IBAction (instead of void), like the user-defined keyInput is, and hooked it up to the appropriate buttons in Interface Builder, but then keyUp was only called when the keys were down and not when released. (Which I kind of figured would happen.) Pardon my noobery, but should I be putting this method in another class or doing something differently? Wherever it is, though, I need it be able to access objects owned by the controller class. Thanks for any insight you may have.

    Read the article

  • How do I cast a void pointer to a struct in C?

    - by Rowhawn
    In a project I'm writing code for, I have a void pointer, "implementation", which is a member of a "Hash_map" struct, and points to an "Array_hash_map" struct. The concepts behind this project are not very realistic, but bear with me. The specifications of the project ask that I cast the void pointer "implementation" to an "Array_hash_map" before I can use it in any functions. My question, specifically is, what do I do in the functions to cast the void pointers to the desired struct? Is there one statement at the top of each function that casts them or do I make the cast every time I use "implementation"? Here are the typedefs the structs of a Hash_map and Array_hash_map as well as a couple functions making use of them. typedef struct { Key_compare_fn key_compare_fn; Key_delete_fn key_delete_fn; Data_compare_fn data_compare_fn; Data_delete_fn data_delete_fn; void *implementation; } Hash_map; typedef struct Array_hash_map{ struct Unit *array; int size; int capacity; } Array_hash_map; typedef struct Unit{ Key key; Data data; } Unit; functions: /* Sets the value parameter to the value associated with the key parameter in the Hash_map. */ int get(Hash_map *map, Key key, Data *value){ int i; if (map == NULL || value == NULL) return 0; for (i = 0; i < map->implementation->size; i++){ if (map->key_compare_fn(map->implementation->array[i].key, key) == 0){ *value = map->implementation->array[i].data; return 1; } } return 0; } /* Returns the number of values that can be stored in the Hash_map, since it is represented by an array. */ int current_capacity(Hash_map map){ return map.implementation->capacity; }

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but couldn't find any performance improvement with partitioning. Queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details: UserUsage - has entries for user mobile number and data usage for each date; mobile number and date are the primary key. UserProfile - queries the previous table and stores a summary for each mobile number; mobile number is the primary key. CREATE TABLE `UserUsage` ( `Msisdn` decimal(20,0) NOT NULL, `Date` date NOT NULL, . . PRIMARY KEY USING BTREE (`Msisdn`,`Date`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; CREATE TABLE `UserProfile` ( `Msisdn` decimal(20,0) NOT NULL, . . PRIMARY KEY (`Msisdn`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY KEY(Msisdn) PARTITIONS 50; The second table is updated by a Perl program that selects from the first table ordered by date; the queries are: select * from UserUsage where Msisdn=number order by Date desc limit 7 [Process data in perl] update UserProfile values(....) where Msisdn=number EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, given that partitioning takes almost the same time as, or more than, the normal tables?

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application that has to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (and so make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt over the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data can be decrypted, which would be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
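
    For what it's worth, below is a minimal C# sketch of the per-packet scheme described above, with an HMAC added so tampered or forged packets can be rejected (encrypt-then-MAC). The key sizes, iteration count, helper names and wire layout (IV | ciphertext | tag) are assumptions for illustration, not a reviewed protocol; where a standard fits the constraints, DTLS is the usual answer for securing UDP.

      // Sketch only: derive key material once per session, then seal each datagram with a
      // fresh IV and an HMAC over IV + ciphertext.
      using System;
      using System.Security.Cryptography;

      public static class PacketCrypto
      {
          // 48 bytes of key material: first 16 for AES-128, last 32 for the HMAC key.
          public static byte[] DeriveKeyMaterial(string passphrase, byte[] salt)
          {
              using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 10000))
                  return kdf.GetBytes(48);
          }

          public static byte[] Seal(byte[] plaintext, byte[] encKey, byte[] macKey)
          {
              using (var aes = Aes.Create())
              using (var hmac = new HMACSHA256(macKey))
              {
                  aes.Key = encKey;
                  aes.GenerateIV(); // fresh random IV for every packet

                  byte[] cipher = aes.CreateEncryptor()
                                     .TransformFinalBlock(plaintext, 0, plaintext.Length);

                  byte[] packet = new byte[aes.IV.Length + cipher.Length + 32];
                  Buffer.BlockCopy(aes.IV, 0, packet, 0, aes.IV.Length);
                  Buffer.BlockCopy(cipher, 0, packet, aes.IV.Length, cipher.Length);

                  // Authenticate IV + ciphertext so altered packets are dropped on receipt.
                  byte[] tag = hmac.ComputeHash(packet, 0, aes.IV.Length + cipher.Length);
                  Buffer.BlockCopy(tag, 0, packet, aes.IV.Length + cipher.Length, 32);
                  return packet;
              }
          }
      }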

    Read the article

  • .NET XML serialization gotchas?

    - by kurious
    I've run into a few gotchas when doing C# XML serialization that I thought I'd share: You can't serialize items that are read-only (like KeyValuePairs) You can't serialize a generic dictionary. Instead, try this wrapper class (from http://weblogs.asp.net/pwelter34/archive/2006/05/03/444961.aspx): using System; using System.Collections.Generic; using System.Text; using System.Xml.Serialization; [XmlRoot("dictionary")] public class SerializableDictionary<TKey, TValue> : Dictionary<TKey, TValue>, IXmlSerializable { public System.Xml.Schema.XmlSchema GetSchema() { return null; } public void ReadXml(System.Xml.XmlReader reader) { XmlSerializer keySerializer = new XmlSerializer(typeof(TKey)); XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue)); bool wasEmpty = reader.IsEmptyElement; reader.Read(); if (wasEmpty) return; while (reader.NodeType != System.Xml.XmlNodeType.EndElement) { reader.ReadStartElement("item"); reader.ReadStartElement("key"); TKey key = (TKey)keySerializer.Deserialize(reader); reader.ReadEndElement(); reader.ReadStartElement("value"); TValue value = (TValue)valueSerializer.Deserialize(reader); reader.ReadEndElement(); this.Add(key, value); reader.ReadEndElement(); reader.MoveToContent(); } reader.ReadEndElement(); } public void WriteXml(System.Xml.XmlWriter writer) { XmlSerializer keySerializer = new XmlSerializer(typeof(TKey)); XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue)); foreach (TKey key in this.Keys) { writer.WriteStartElement("item"); writer.WriteStartElement("key"); keySerializer.Serialize(writer, key); writer.WriteEndElement(); writer.WriteStartElement("value"); TValue value = this[key]; valueSerializer.Serialize(writer, value); writer.WriteEndElement(); writer.WriteEndElement(); } } } Any other XML gotchas out there?
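
    A quick usage sketch for the wrapper above (the file name and the sample entries are made up for the example): it round-trips a dictionary through XmlSerializer using the <dictionary>/<item>/<key>/<value> layout the class reads and writes.

      // Assumes the SerializableDictionary<TKey, TValue> class shown above is in scope.
      using System;
      using System.IO;
      using System.Xml.Serialization;

      class SerializableDictionaryDemo
      {
          static void Main()
          {
              var counts = new SerializableDictionary<string, int> { { "apples", 3 }, { "pears", 5 } };
              var serializer = new XmlSerializer(typeof(SerializableDictionary<string, int>));

              using (var writer = new StreamWriter("counts.xml"))
                  serializer.Serialize(writer, counts); // writes <dictionary><item>...</item></dictionary>

              using (var reader = new StreamReader("counts.xml"))
              {
                  var roundTripped = (SerializableDictionary<string, int>)serializer.Deserialize(reader);
                  Console.WriteLine(roundTripped["apples"]); // 3
              }
          }
      }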

    Read the article

  • Bing search API and Azure

    - by Gapton
    I am trying to programmatically perform a search on the Microsoft Bing search engine. Here is my understanding: There was a Bing Search API 2.0, which will be replaced soon (1st Aug 2012). The new API is known as Windows Azure Marketplace. You use different URLs for the two. In the old API (Bing Search API 2.0), you specify a key (Application ID) in the URL, and that key is used to authenticate the request. As long as you have the key as a parameter in the URL, you can obtain the results. In the new API (Windows Azure Marketplace), you do NOT include the key (Account Key) in the URL. Instead, you put in a query URL, and the server then asks for your credentials. When using a browser, there is a pop-up asking for an account name and password. The instructions are to leave the account name blank and insert your key in the password field. Okay, I have done all that and I can see JSON-formatted results of my search in my browser. How do I do this programmatically in PHP? I tried searching for documentation and sample code in the Microsoft MSDN library, but I was either searching in the wrong place or there are extremely limited resources there. Would anyone be able to tell me how you do the "enter the key in the password field in the pop-up" part in PHP please? Thanks a lot in advance.

    Read the article

  • assign a model's attribute through association

    - by justcode
    I'm new to rails and working on a rails app and I'm stuck pondering this issue. I have three models: class Product < ActiveRecord::Base attr_accessible :name, :issn, :category validates_presence_of :name, :issn, :category validates_numericality_of :issn, :message => "has to be a number" has_many :user_products has_many :users, :through => :user_products end class UserProduct < ActiveRecord::Base attr_accessible :price, :category validates_presence_of :price, :category validates_numericality_of :price, :message => "has to be a number" belongs_to :user belongs_to :product end class User < ActiveRecord::Base # devise authentication here # Setup accessible (or protected) attributes for your model attr_accessible :email, :password, :password_confirmation, :remember_me has_many :user_products has_many :products, :through => :user_products end Here is my new.html.erb: <div class="MainBodyWrapper"> <div class="span8"> <div id="listBoxWrapper"> <fieldset> <%= form_for(@product, :html => { :class => "form-inline" }, :style => "margin-bottom: 60px" ) do |f| %> <div class="control-group"> <label class="control-label" for="name">name</label> <div class="controls"> <%= f.text_field :price, :class => 'input-xlarge input-name', :id => "name" %> </div> </div> <div class="listingButtons"> <button class="btn btn-info">Add</button> <a class="btn">Upload Pictures (anytime)</a> </div> </fieldset> </div> </div> There are reasons why I want to set up the models this way. So the question is this: I want the user to enter the info for the product in the form, but it also involves putting in the price of the product, which exists in a different model/table (user_product) that is associated with product. How can I do this? You can see that my form_for uses @product. Any help will be appreciated.

    Read the article

  • Cannot .Count() on IQueryable (NHibernate)

    - by Bruno Reis
    Hello, I have an irritating problem. It might be something stupid, but I couldn't figure it out. I'm using Linq to NHibernate, and I would like to count how many items there are in a repository. Here is a very simplified definition of my repository, with the code that matters: public class Repository { private ISession session; /* ... */ public virtual IQueryable<Product> GetAll() { return session.Linq<Product>(); } } All the relevant code is at the end of the question. Then, to count the items in my repository, I do something like: var total = productRepository.GetAll().Count(); The problem is that total is 0. Always. However, there are items in the repository. Furthermore, I can .Get(id) any of them. My NHibernate log shows that the following query was executed: SELECT count(*) as y0_ FROM [Product] this_ WHERE not (1=1) That "WHERE not (1=1)" clause must be the cause of this problem. What can I do to be able to .Count() the items in my repository? Thanks! EDIT: Actually the repository.GetAll() code is a little bit different... and that might change something! It is actually a generic repository for Entities. Some of the entities also implement the ILogicalDeletable interface (it contains a single bool property "IsDeleted"). Just before the "return" inside the GetAll() method I check if the Entity I'm querying implements ILogicalDeletable. public interface IRepository<TEntity, TId> where TEntity : Entity<TEntity, TId> { IQueryable<TEntity> GetAll(); ... } public abstract class Repository<TEntity, TId> : IRepository<TEntity, TId> where TEntity : Entity<TEntity, TId> { public virtual IQueryable<TEntity> GetAll() { if (typeof (ILogicalDeletable).IsAssignableFrom(typeof (TEntity))) { return session.Linq<TEntity>() .Where(x => (x as ILogicalDeletable).IsDeleted == false); } else { return session.Linq<TEntity>(); } } } public interface ILogicalDeletable { bool IsDeleted {get; set;} } public class Product : Entity<Product, int>, ILogicalDeletable { ... } public interface IProductRepository : IRepository<Product, int> {} public class ProductRepository : Repository<Product, int>, IProductRepository {} Edit 2: actually .GetAll() always returns an empty result set for entities that implement the ILogicalDeletable interface (i.e., it ALWAYS adds a WHERE NOT (1=1) clause). I think Linq to NHibernate does not like the typecast.
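
    One workaround worth trying for the Edit 2 behaviour, sketched here under the assumption that it is the "(x as ILogicalDeletable)" cast inside the lambda that the Linq-to-NHibernate provider fails to translate: build the predicate against TEntity itself with an expression tree, so the provider only ever sees a plain IsDeleted property access. The helper name is made up, and this has not been verified against that provider version.

      // Sketch: produces and applies "x => x.IsDeleted == false" typed against TEntity.
      using System;
      using System.Linq;
      using System.Linq.Expressions;

      public static class LogicalDeleteFilter
      {
          public static IQueryable<TEntity> WhereNotDeleted<TEntity>(IQueryable<TEntity> source)
          {
              var x = Expression.Parameter(typeof(TEntity), "x");
              var notDeleted = Expression.Lambda<Func<TEntity, bool>>(
                  Expression.Equal(
                      Expression.Property(x, "IsDeleted"), // resolved on TEntity, no interface cast
                      Expression.Constant(false)),
                  x);
              return source.Where(notDeleted);
          }
      }

    GetAll() would then return WhereNotDeleted(session.Linq<TEntity>()) in the branch where the entity implements ILogicalDeletable.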

    Read the article

  • Confusion in MIPS code

    - by Haya Hallian
    While going through some MIPS code I got confused. The code is shown as follows: .data key: .ascii "key: " # "key: \n" char: .asciiz " \n" .text .globl main main: jal getchar la $a0, char # $a0 contains address of char variable (" \n") sb $v0, ($a0) # replace " " in char with v0, which is read_character (X) la $a0, key # now a0 will contain, address of "key: " "X\n" What I don't understand is how the load address instruction works. First $a0 contained the address of the char variable. In the next line we store the value of $v0 at that location. There is no offset with ($a0); is that assumed to be 0, as in 0($a0)? Why is only the " " empty space replaced with $v0, and why doesn't the "\n" get replaced? It could also have been the case that both the empty space and the \n character get replaced by $v0. Secondly, when we load the address of key into $a0, the previous address should be overwritten; $a0 should contain the address of key only, but from the comment it seems that the two strings are concatenated. How does that happen?

    Read the article

  • modified closure warning in ReSharper

    - by Sarah Vessels
    I was hoping someone could explain to me what bad thing could happen in this code, which causes ReSharper to give an 'Access to modified closure' warning: bool result = true; foreach (string key in keys.TakeWhile(key => result)) { result = result && ContainsKey(key); } return result; Even if the code above seems safe, what bad things could happen in other 'modified closure' instances? I often see this warning as a result of using LINQ queries, and I tend to ignore it because I don't know what could go wrong. ReSharper tries to fix the problem by making a second variable that seems pointless to me, e.g. it changes the foreach line above to: bool result1 = result; foreach (string key in keys.TakeWhile(key => result1)) Update: on a side note, apparently that whole chunk of code can be converted to the following statement, which causes no modified closure warnings: return keys.Aggregate( true, (current, key) => current && ContainsKey(key) );
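
    For what it's worth, a minimal sketch (separate from the question's code) of the failure mode the warning is really about: when execution of the lambda is deferred, it sees the captured variable's value at invocation time, not at capture time. The TakeWhile example above happens to be safe because the sequence is consumed inside the same loop that mutates result, but ReSharper cannot prove that intent, hence the warning.

      // Classic modified-closure pitfall: the lambdas run after the loop has finished.
      using System;
      using System.Collections.Generic;

      class ClosureDemo
      {
          static void Main()
          {
              var actions = new List<Action>();
              foreach (var word in new[] { "a", "b", "c" })
              {
                  actions.Add(() => Console.Write(word + " "));
              }
              foreach (var action in actions)
                  action(); // C# 4 and earlier: "c c c " (one shared variable); C# 5+: "a b c "
          }
      }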

    Read the article

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis, only problem is each season the data their isn't 100% consistent. I grab the json file from the site, then wish to save it to a csv with the first line in the csv containing the heading for that column, so the heading would be essentially the key from the python data type. #!/usr/bin/env python import requests import json import csv base_url = 'http://www.afl.com.au/api/cfs/afl/' token_url = base_url + 'WMCTok' player_url = base_url + 'matchItems/round' def printPretty(data): print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': '))) session = requests.Session() # session makes it simple to use the token across the requests token = session.post(token_url).json()['token'] # get the token session.headers.update({'X-media-mis-token': token}) # set the token Season = 2014 Roundno = 4 if Roundno<10: strRoundno = '0'+str(Roundno) else: strRoundno = str(Roundno) # get some data (could easily be a for loop, might want to put in a delay using Sleep so that you don't get IP blocked) data = session.get(player_url + '/CD_R'+str(Season)+'014'+strRoundno) # print everything printPretty(data.json()) with open('stats_game_test.csv', 'w', newline='') as csvfile: spamwriter = csv.writer(csvfile, delimiter="'",quotechar='|', quoting=csv.QUOTE_ALL) for profile in data.json()['items']: spamwriter.writerow(['%s' %(profile)]) #for key in data.json().keys(): # print("key: %s , value: %s" % (key, data.json()[key])) The above code grabs the json and writes it to a csv, but it puts the key in each individual cell next to the value (eg 'venueId': 'CD_V190'), the key needs to be just across the first row as a heading. It gives me a csv file with data in the cells like this Column A B 'tempInCelsius': 17.0 'totalScore': 32 'tempInCelsius': 16.0 'totalScore': 28 What I want is the data like this tempInCelsius totalScore 17 32 16 28 As I mentioned up the top, the data isn't always consistent so if I define what fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying the above method so it just grabs everything regardless of what data is there.

    Read the article

  • Can a Java HashMap's size() be out of sync with its actual entries' size ?

    - by trix
    I have a Java HashMap called statusCountMap. Calling size() results in 30. But if I count the entries manually, it's 31 This is in one of my TestNG unit tests. These results below are from Eclipse's Display window (type code - highlight - hit Display Result of Evaluating Selected Text). statusCountMap.size() (int) 30 statusCountMap.keySet().size() (int) 30 statusCountMap.values().size() (int) 30 statusCountMap (java.util.HashMap) {40534-INACTIVE=2, 40526-INACTIVE=1, 40528-INACTIVE=1, 40492-INACTIVE=3, 40492-TOTAL=4, 40513-TOTAL=6, 40532-DRAFT=4, 40524-TOTAL=7, 40526-DRAFT=2, 40528-ACTIVE=1, 40524-DRAFT=2, 40515-ACTIVE=1, 40513-DRAFT=4, 40534-DRAFT=1, 40514-TOTAL=3, 40529-DRAFT=4, 40515-TOTAL=3, 40492-ACTIVE=1, 40528-TOTAL=4, 40514-DRAFT=2, 40526-TOTAL=3, 40524-INACTIVE=2, 40515-DRAFT=2, 40514-ACTIVE=1, 40534-TOTAL=3, 40513-ACTIVE=2, 40528-DRAFT=2, 40532-TOTAL=4, 40524-ACTIVE=3, 40529-ACTIVE=1, 40529-TOTAL=5} statusCountMap.entrySet().size() (int) 30 What gives ? Anyone has experienced this ? I'm pretty sure statusCountMap is not being modified at this point. There are 2 methods (lets call them methodA and methodB) that modify statusCountMap concurrently, by repeatedly calling incrementCountInMap. private void incrementCountInMap(Map map, Long id, String qualifier) { String key = id + "-" + qualifier; if (map.get(key) == null) { map.put(key, 0); } synchronized (map) { map.put(key, map.get(key).intValue() + 1); } } methodD is where I'm getting the issue. methodD has a TestNG @dependsOnMethods = { "methodA", "methodB" } so when methodD is executing, statusCountMap is pretty much static already. I'm mentioning this because it might be a bug in TestNG. I'm using Sun JDK 1.6.0_24. TestNG is testng-5.9-jdk15.jar Hmmm ... after rereading my post, could it be because of concurrent execution of outside-of-synchronized-block map.get(key) == null & map.put(key,0) that's causing this issue ?

    Read the article

  • Need Explanation of couchdb reduce function

    - by Alan
    From http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views The couchdb reduce function is defined as function (key, values, rereduce) { return sum(values); } key will be an array whose elements are arrays of the form [key,id] values will be an array of the values emitted for the respective elements in keys i.e. reduce([ [key1,id1], [key2,id2], [key3,id3] ], [value1,value2,value3], false) I am having trouble understanding when/why the array of keys would contain different key values. If the array of keys does contain different key values, how would I deal with it? As an example, assume that my database contains movements between accounts of the form. {"amount":100, "CreditAccount":"account_number", "DebitAccount":"account_number"} I want a view that gives the balance of an account. My map function does: emit( doc.CreditAccount, doc.amount ) emit( doc.DebitAccount, -doc.amount ) My reduce function does: return sum(values); I seem to get the expected results, however I can't reconcile this with the possibility that my reduce function gets different key values. Is my reduce function supposed to group key values first? What kind of result would I return in that case?

    Read the article

  • MySQL Multiple Table Join

    - by hitman001
    I have a 3 tables that I'm trying to join and get distinct results. CREATE TABLE `car` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL DEFAULT '', PRIMARY KEY (`id`) ) ENGINE=InnoDB mysql> select * from car; +----+-------+ | id | name | +----+-------+ | 1 | acura | +----+-------+ CREATE TABLE `tires` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `tire_desc` varchar(255) DEFAULT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new_fk_constraint` (`car_id`), CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from tires; +----+-------------+--------+ | id | tire_desc | car_id | +----+-------------+--------+ | 1 | front_right | 1 | | 2 | front_left | 1 | +----+-------------+--------+ CREATE TABLE `lights` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `lights_desc` varchar(255) NOT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new1_fk_constraint` (`car_id`), CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from lights; +----+-------------+--------+ | id | lights_desc | car_id | +----+-------------+--------+ | 1 | right_light | 1 | | 2 | left_light | 1 | +----+-------------+--------+ Here is my query. mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc) FROM car left join tires on car.id = tires.car_id left join lights on car.id = car_id group by car.id; +-------+-----------------------------------------------+-----------------------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+-----------------------------------------------+ | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light | +-------+-----------------------------------------------+-----------------------------------------------+ I get duplicate entires and this is what I would like to get. +-------+-----------------------------------------------+--------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+--------------------------------+ | acura | front_right,front_left | right_light,left_light | +-------+-----------------------------------------------+--------------------------------+ I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below? SELECT name, (select group_concat(tire_desc) from tires where car.id = tires.car_id), (select group_concat(lights_desc) from lights where car.id = lights.car_id) FROM car Also, if I will use inner selects, will there be any performance issues over joins?

    Read the article

  • memcached: which is faster, doing an add (and checking result), or doing a get (and set when returni

    - by Mike Sherov
    The title of this question isn't so clear, but the code and question is straightforward. Let's say I want to show my users an ad once per day. To accomplish this, every time they visit a page on my site, I check to see if a certain memcache key has any data stored on it. If so, don't show an ad. If not, store the value '1' in that key with an expiration of 86400. I can do this 2 ways: //version a $key='OPD_'.date('Ymd').'_'.$type.'_'.$user; if($memcache->get($key)===false){ $memcache->set($key,'1',false,$expire); //show ad } //version b $key='OPD_'.date('Ymd').'_'.$type.'_'.$user; if($memcache->add($key,'1',false,$expire)){ //show ad } Now, it might seem obvious that b is better, it always makes 1 memcache call. However, what is the overhead of "add" vs. "get"? These aren't the real comparisons... and I just made up these numbers, but let's say 1 add ~= 1 set ~= 5 get in terms of effort, and the average user views 5 pages a day: a: (5 get * 1 effort) + (1 set * 5 effort) = 10 units of effort b: (5 add * 5 effort) = 25 units of effort Would it make sense to always do the add call? Is this an unnecessary micro-optimization?

    Read the article

  • comparison of strings

    - by EmiLazE
    I am writing a program that simulates the game Mastermind, but I am struggling with how to compare the guessed pattern to the key pattern. The game conditions are changed a little: patterns consist of letters. If an element of the guessed pattern is equal to an element of the key pattern and the index is also equal, print 'b'. If an element of the guessed pattern is equal to an element of the key pattern but the index is not, print 'w'. If an element of the guessed pattern is not equal to an element of the key pattern, print '.'. In the feedback about the guessed pattern, 'b's come first, 'w's second, '.'s last. My problem is that I cannot think of a way that totally satisfies these requirements. for (i=0; i<patternlength; i++) { for (x=0; x<patternlength; x++) { if (guess[i]==key[x] && i==x) printf("b"); if (guess[i]==key[x] && i!=x) printf("w"); if (guess[i]!=key[x]) printf("."); } }

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure Create a Hosted Service Install the Windows Azure Tools for Visual Studio Create a Windows Azure Cloud Service Deploy the Cloud Service Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry.     To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther).     Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Services.     Because we have code that we want to run in the cloud – the WCF Service -- we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services.     When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.   Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010.   Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin:   Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service with the name jQueryApp.     Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.   
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } }   In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html>   Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information is packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload these files yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application.   Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button.     While your application is in the process of being deployed, you can view a progress bar.     Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button.   It might take several minutes for your application to initialize (go grab a coffee). 
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of the sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance : 1000 cubes brings performance below 50fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows: done once: set up a vertex buffer and array buffer containing my cube vertices in model space set up an array buffer indexing the cube for drawing as 12 triangles done for each frame: update uniform values used by the vertex shader to move all cubes at once done for each cube, for each frame: update uniform values used by the vertex shader to move each cube to its position call glDrawElements to draw the positioned cube Is this a sane method ? If not, how does one go about something like this ? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test : (depends on gletools and pyglet) I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, I'll move to something a little less insane for the creation of the vertex buffers and such later on. import pyglet from pyglet.gl import * from pyglet.window import key from numpy import deg2rad, tan from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader vertexData = [-0.5, -0.5, -0.5, 1.0, -0.5, 0.5, -0.5, 1.0, 0.5, -0.5, -0.5, 1.0, 0.5, 0.5, -0.5, 1.0, -0.5, -0.5, 0.5, 1.0, -0.5, 0.5, 0.5, 1.0, 0.5, -0.5, 0.5, 1.0, 0.5, 0.5, 0.5, 1.0] elementArray = [2, 1, 0, 1, 2, 3,## back face 4, 7, 6, 4, 5, 7,## front face 1, 3, 5, 3, 7, 5,## top face 2, 0, 4, 2, 4, 6,## bottom face 1, 5, 4, 0, 1, 4,## left face 6, 7, 3, 6, 3, 2]## right face def toGLArray(input): return (GLfloat*len(input))(*input) def toGLushortArray(input): return (GLushort*len(input))(*input) def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45): frustumScale = 1.0 / tan(deg2rad(fov) / 2.0) fzNear = 0.5 fzFar = 300.0 perspectiveMatrix = [frustumScale*aspectRatio, 0.0 , 0.0 , 0.0 , 0.0 , frustumScale, 0.0 , 0.0 , 0.0 , 0.0 , (fzFar+fzNear)/(fzNear-fzFar) , -1.0, 0.0 , 0.0 , (2*fzFar*fzNear)/(fzNear-fzFar), 0.0 ] return perspectiveMatrix class ModelObject(object): vbo = GLuint() vao = GLuint() eao = GLuint() initDone = False verticesPool = [] indexPool = [] def __init__(self, vertices, indexing): super(ModelObject, self).__init__() if not ModelObject.initDone: glGenVertexArrays(1, ModelObject.vao) glGenBuffers(1, ModelObject.vbo) glGenBuffers(1, ModelObject.eao) glBindVertexArray(ModelObject.vao) initDone = True self.numIndices = len(indexing) self.offsetIntoVerticesPool = len(ModelObject.verticesPool) ModelObject.verticesPool.extend(vertices) self.offsetIntoElementArray = len(ModelObject.indexPool) ModelObject.indexPool.extend(indexing) glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo) glEnableVertexAttribArray(0) #position glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao) glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW) def draw(self): glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray) class PositionedObject(object): 
def __init__(self, mesh, pos, objOffsetUf): super(PositionedObject, self).__init__() self.mesh = mesh self.pos = pos self.objOffsetUf = objOffsetUf def draw(self): glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2]) self.mesh.draw() w = 800 h = 600 AR = float(h)/float(w) window = pyglet.window.Window(width=w, height=h, vsync=False) window.set_exclusive_mouse(True) pyglet.clock.set_fps_limit(None) ## input forward = [False] left = [False] back = [False] right = [False] up = [False] down = [False] inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right, key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right, key.PAGEUP: up, key.PAGEDOWN: down} ## camera camX = 0.0 camY = 0.0 camZ = -1.0 def simulate(delta): global camZ, camX, camY scale = 10.0 move = scale*delta if forward[0]: camZ += move if back[0]: camZ += -move if left[0]: camX += move if right[0]: camX += -move if up[0]: camY += move if down[0]: camY += -move pyglet.clock.schedule(simulate) @window.event def on_key_press(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = True @window.event def on_key_release(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = False ## uniforms for shaders camOffsetUf = GLuint() objOffsetUf = GLuint() perspectiveMatrixUf = GLuint() camRotationUf = GLuint() program = ShaderProgram( VertexShader(''' #version 330 layout(location = 0) in vec4 objCoord; uniform vec3 objOffset; uniform vec3 cameraOffset; uniform mat4 perspMx; void main() { mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f); mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, objOffset.x, objOffset.y, objOffset.z, 1.0f); vec4 modelCoord = objCoord; vec4 positionedModel = translateObject*modelCoord; vec4 cameraPos = translateCamera*positionedModel; gl_Position = perspMx * cameraPos; }'''), FragmentShader(''' #version 330 out vec4 outputColor; const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f); void main() { outputColor = fillColor; }''') ) shapes = [] def init(): global camOffsetUf, objOffsetUf with program: camOffsetUf = glGetUniformLocation(program.id, "cameraOffset") objOffsetUf = glGetUniformLocation(program.id, "objOffset") perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx") glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR))) obj = ModelObject(vertexData, elementArray) nb = 20 for i in range(nb): for j in range(nb): for k in range(nb): shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf)) glEnable(GL_CULL_FACE) glCullFace(GL_BACK) glFrontFace(GL_CW) glEnable(GL_DEPTH_TEST) glDepthMask(GL_TRUE) glDepthFunc(GL_LEQUAL) glDepthRange(0.0, 1.0) glClearDepth(1.0) def update(dt): print pyglet.clock.get_fps() pyglet.clock.schedule_interval(update, 1.0) @window.event def on_draw(): with program: pyglet.clock.tick() glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) glUniform3f(camOffsetUf, camX, camY, camZ) for shape in shapes: shape.draw() init() pyglet.app.run()

    Read the article

  • How do I add "Press any key to boot from usb" when installing Windows from a flash drive? (Grub4dos question / how to remove a bootloader)

    - by Vincent
    Hi there! I've been struggling with this problem for a while now and finally decided to ask for help. Let me first explain the main purpose of the app: to provide a very easy-to-use way of backing up files, after which I format the drive and start Windows 7 setup. I do this by booting WinPE, which runs a script to detect Windows installations and then opens a file browser. After the file browser is closed, the script continues, formats the drive that contains the Windows installation, and starts an unattended Windows 7 install. Now here is the problem: When you start Windows setup or WinPE from a DVD, you get a nice option to "Press any key to boot from DVD". This is to prevent the computer from booting the DVD when the first phase of the installation is complete and the computer reboots. However, when booting from a flash drive, Windows does not provide this option: it simply boots the flash drive every reboot. To replicate the "press any key" function, I installed Grub4Dos, which works great. It provides a small menu, the first standard item being "Continue installation", the second being "Start installation". After quite a lot of tweaking, I got everything working: Start installation starts WinPE, which in turn starts the Windows installation. At first reboot, the Grub4Dos menu comes up, counts 5 seconds and boots the second stage of the installation. Here, I am greeted with the error: "Windows setup could not configure Windows to run on this computer's hardware." When I boot into WinPE the normal way (put the bootmgr on the stick root) and change my BIOS to boot from the primary hdd after the first reboot, I don't get this error. I've been looking around, and the only thing I could find was that the BIOS automatically names the boot device hd0, and that Windows can only be run / installed to hd0. I'm not sure if this is the problem. I read about remapping to solve this problem, but to do that you have to know the physical location of the hard drive and partition, like hd(0,1). I want this flash drive to work on any PC, regardless of where the OS is installed, so that's not really a possibility. A possible fix I thought of is removing the bootloader from the flash drive when I'm in WinPE. That way, when the PC reboots, the BIOS will not see the flash drive as a boot drive and will boot the primary hdd instead. I have yet to find a way to do this. Thank you for reading my question, and if you have any suggestions, please share them.

    Read the article

  • SChannel "cannot find certificate in either LocalMachine or CurrentUser store"

    - by Chris J
    We have an in-house application that requires the use of client SSL certificates to authenticate with a remote server (not under our control). This has worked without problems before but on deploying to a new server, we're having problems getting Windows 2008 to use the certificate. The certificate exists as a .pfx file that contains a private key. The same certificate exists in the LocalMachine store, again with its private key. We've ensured the one in the LocalMachine store is correct by creating a website in IIS against that certificate, so we're happy that the certificate, certificate chain, and private key is valid. The PFX has been created by exporting from the Certificates MMC snap-in. The issue is that we get the following in the system diagnostic logs that suggests it can't find the private key: System.Net Information: 0 : [5988] SecureChannel#23264094 – Locating the private key for the certificate: [Subject] CN=internal-server.company.com, OU=Servers, OU=Devices, O=org [Issuer] CN=SubCA02, OU=CA, o=org [Serial Number] 407ABCDE [Not Before] 31/10/2013 11:08:48 AM [Not After] 31/10/2016 11:08:48 AM [Thumbprint] 4354A34F6004F019E60F055979A47E50F62D1504 . System.Net Information: 0 : [5988] SecureChannel#23264094 – Cannot find the certificate in either the LocalMachine store or the CurrentUser store. I've validated the thumbprint, issuer and serial number listed in the log with the certificate in the LocalMachine store and these marry up. From what I can tell with much searching, this appears to be a permissions issue. The user the application is running as has been granted access to the private key (Personal Certificates - right click on the certificate - all tasks - Manage Private Keys), so I'm now at a loss as to which permission(s) it may be that is causing the issue.
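
    A small diagnostic sketch that may help narrow this down (run it under the same account the application uses; the thumbprint below is simply the one quoted in the log): if Find returns nothing, the store is not visible to that account as expected, and if HasPrivateKey is false, the private key association or its ACL is the likely culprit rather than the certificate itself.

      // Check whether the certificate and its private key are visible to the current account.
      using System;
      using System.Security.Cryptography.X509Certificates;

      class CertCheck
      {
          static void Main()
          {
              var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
              store.Open(OpenFlags.ReadOnly);

              var matches = store.Certificates.Find(
                  X509FindType.FindByThumbprint,
                  "4354A34F6004F019E60F055979A47E50F62D1504",
                  validOnly: false);

              if (matches.Count == 0)
                  Console.WriteLine("Certificate not found in LocalMachine\\My for this account.");
              else
                  Console.WriteLine("Found; HasPrivateKey = " + matches[0].HasPrivateKey);

              store.Close();
          }
      }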

    Read the article

  • Can't connect to MySQL using a self-signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I have configured my remote mysqld to use it (and SSL is enabled). I ssh into my remote server and try connecting to its own mysqld using SSL (MySQL server is 5.5.25).. ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert Enter password: ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1) Ok, I remember reading there's some problem with connecting to the same server via SSL. So I download the client keys to my local box and test from there... ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert Enter password: ERROR 2026 (HY000): SSL connection error It's unclear what this "SSL connection error" refers to, but if I omit --ssl-ca, then I am able to connect using SSL.. ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 37 Server version: 5.5.25 MySQL Community Server (GPL) However, I believe that this only encrypts the connection and does not actually verify the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack). The SSL certs are valid (albeit self-signed) and do not have a passphrase on them. So my question is, what am I doing wrong? How can I connect via SSL using a self-signed certificate? The MySQL server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice. Edit: Note that in all cases, the command is being issued from the same directory where the SSL keys reside (hence no absolute path).

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >