Search Results

Search found 19390 results on 776 pages for 'key bindings'.

  • Why this function overloading is not working?

    - by Jack
    class CConfFile
    {
    public:
        CConfFile(const std::string &FileName);
        ~CConfFile();
        ...
        std::string GetString(const std::string &Section, const std::string &Key);
        void GetString(const std::string &Section, const std::string &Key, char *Buffer, unsigned int BufferSize);
        ...
    };

    string CConfFile::GetString(const string &Section, const string &Key)
    {
        return GetKeyValue(Section, Key);
    }

    void GetString(const string &Section, const string &Key, char *Buffer, unsigned int BufferSize)
    {
        string Str = GetString(Section, Key); // *** ERROR ***
        strncpy(Buffer, Str.c_str(), Str.size());
    }

    Why do I get an error "too few arguments to function 'void GetString(const std::string&, const std::string&, char*, unsigned int)'" at the second function? Thanks
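
    A likely explanation, with a sketch of the fix (my addition, not part of the original post): the four-argument overload is defined as a free function rather than as the CConfFile member it was declared as. Inside a free function, the unqualified call GetString(Section, Key) only finds the free GetString (itself), which takes four arguments, hence "too few arguments". Qualifying the definition with the class name makes the two-argument member visible again:

        // Sketch of the fix: define the overload as the member it was declared as.
        void CConfFile::GetString(const std::string &Section, const std::string &Key,
                                  char *Buffer, unsigned int BufferSize)
        {
            std::string Str = GetString(Section, Key); // now resolves to the 2-arg member
            strncpy(Buffer, Str.c_str(), BufferSize);  // bounded by BufferSize, not Str.size(),
                                                       // to avoid overrunning the caller's buffer
        }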

  • How to delete a Dictionary row that is a Double by using an Int?

    - by Richard Reddy
    Hi, I have a Dictionary object that uses a double as its key. It looks like this:

        Dictionary<double, ClassName> VariableName = new Dictionary<double, ClassName>();

    For my project the key has to be a double, because I need values like 1.1, 1.2, 2.1, 2.2, etc. in my system. Everything in my system works great except when I want to delete all the keys in a group, e.g. all the "1" values would be 1.1, 1.2, etc. I can delete rows if I know the full value of the key, e.g. 1.1, but in my system I will only know the whole number. I tried to do the following but get an error:

        DictionaryVariable.Remove(j => Convert.ToInt16(j.Key) == rowToEdit).OrderByDescending(j => j.Key);

    Is there any way to remove all rows per int value by converting the key? Thanks, Rich
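
    A minimal sketch of one way to do this (my addition; note that Dictionary.Remove takes a key, not a predicate, which is why the lambda above fails, and Convert.ToInt16 rounds rather than truncates). Collect the matching keys first, then remove them; requires System.Linq:

        // Sketch: gather keys whose whole-number part matches, then remove them.
        var keysToRemove = VariableName.Keys
            .Where(k => (int)Math.Truncate(k) == rowToEdit)
            .ToList();                 // materialize before mutating the dictionary
        foreach (var key in keysToRemove)
            VariableName.Remove(key);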

  • Select rows where column LIKE dictionary word

    - by Gerve
    I have 2 tables:

    Dictionary - contains roughly 36,000 words:

        CREATE TABLE IF NOT EXISTS `dictionary` (
          `word` varchar(255) NOT NULL,
          PRIMARY KEY (`word`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Datas - contains roughly 100,000 rows:

        CREATE TABLE IF NOT EXISTS `datas` (
          `ID` int(11) NOT NULL AUTO_INCREMENT,
          `hash` varchar(32) NOT NULL,
          `data` varchar(255) NOT NULL,
          `length` int(11) NOT NULL,
          `time` int(11) NOT NULL,
          PRIMARY KEY (`ID`),
          UNIQUE KEY `hash` (`hash`),
          KEY `data` (`data`),
          KEY `length` (`length`),
          KEY `time` (`time`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=105316;

    I would like to somehow select all the rows from datas where the column data contains 1 or more dictionary words. I understand this is a big ask - it would need to match all of these rows together in every combination possible - so it needs the best optimization. I have tried the query below, but it just hangs for ages:

        SELECT `datas`.*, `dictionary`.`word`
        FROM `datas`, `dictionary`
        WHERE `datas`.`data` LIKE CONCAT('%', `dictionary`.`word`, '%')
        AND LENGTH(`dictionary`.`word`) > 3
        ORDER BY `length` ASC
        LIMIT 15

    I have also tried something similar to the above with a LEFT JOIN, with an ON clause that contained the LIKE condition.
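
    One direction worth noting (my addition, not from the original post): a leading-wildcard LIKE can never use the index on `data`, so every dictionary word forces a full scan of datas. A full-text index avoids that, at the cost of matching whole words only; note that FULLTEXT on InnoDB requires MySQL 5.6+ (on older versions the table would have to be MyISAM), and AGAINST() only accepts a constant string, so the lookup has to be issued once per word, e.g. from application code:

        -- sketch, assuming full-text support is available:
        ALTER TABLE `datas` ADD FULLTEXT INDEX `ft_data` (`data`);

        -- one query per dictionary word ('someword' is a placeholder):
        SELECT *
        FROM `datas`
        WHERE MATCH(`data`) AGAINST ('someword' IN BOOLEAN MODE)
        ORDER BY `length` ASC
        LIMIT 15;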

  • How to reverse a dictionary that has repeated values (python)

    - by Galois
    Hi guys! I have a dictionary with almost 100,000 (key, value) pairs, and the majority of the keys map to the same values. For example, imagine something like this:

        dict = {'a': 1, 'c': 2, 'b': 1, 'e': 2, 'd': 3, 'h': 1, 'j': 3}

    What I want to do is reverse the dictionary, so that each value in dict becomes a key in reversed_dict, mapping to the list of all the keys of dict that used to map to it. Based on the example above I would get:

        reversed_dict = {1: ['a', 'b', 'h'], 2: ['e', 'c'], 3: ['d', 'j']}

    I came up with a solution that is very expensive, and I would really like to hear any ideas more efficient than mine.

    My expensive solution:

        reversed_dict = {}
        for value in dict.values():
            reversed_dict[value] = []
            for key in dict.keys():
                if dict[key] == value:
                    if key not in reversed_dict[value]:
                        reversed_dict[value].append(key)

        Output >> reversed_dict = {1: ['a', 'b', 'h'], 2: ['c', 'e'], 3: ['d', 'j']}

    I would really appreciate any ideas better and more efficient than mine. Thanks!
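
    A sketch of the standard linear-time approach (my addition): a single pass that appends each key to the list for its value, instead of one pass per distinct value:

        from collections import defaultdict

        def reverse_dict(d):
            # One pass over the items: O(n) instead of O(n * distinct values).
            reversed_dict = defaultdict(list)
            for key, value in d.items():
                reversed_dict[value].append(key)
            return dict(reversed_dict)

        print(reverse_dict({'a': 1, 'c': 2, 'b': 1, 'e': 2, 'd': 3, 'h': 1, 'j': 3}))
        # {1: ['a', 'b', 'h'], 2: ['c', 'e'], 3: ['d', 'j']}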

  • How to get google app engine logs in C#?

    - by Max
    I am trying to retrieve App Engine logs, but the only result I get is "# next_offset=None". Below is my code:

        internal string GetLogs()
        {
            string result = _connection.Get("/api/request_logs", GetPostParameters(null));
            return result;
        }

        private Dictionary<string, string> GetPostParameters(Dictionary<string, string> customParameters)
        {
            Dictionary<string, string> parameters = new Dictionary<string, string>()
            {
                { "app_id", _settings.AppId },
                { "version", _settings.Version.ToString() }
            };

            if (customParameters != null)
            {
                foreach (string key in customParameters.Keys)
                {
                    if (parameters.ContainsKey(key))
                    {
                        parameters[key] = customParameters[key];
                    }
                    else
                    {
                        parameters.Add(key, customParameters[key]);
                    }
                }
            }

            return parameters;
        }

  • parse unformatted string into dictionary with python

    - by user553131
    I have the following string:

        DATE: 12242010Key Type: Nod32 Anti-Vir (30d trial) Key: a5B2s-sH12B-hgtY3-io87N-srg98-KLMNO

    I need to create a dictionary from it, so it would be like:

        {
            "DATE": "12242010",
            "Key Type": "Nod32 Anti-Vir (30d trial)",
            "Key": "a5B2s-sH12B-hgtY3-io87N-srg98-KLMNO"
        }

    The problem is that the string is unformatted: in "DATE: 12242010Key Type: Nod32 Anti-Vir (30d trial)" there is no space after the date, before "Key Type". It would also be nice to have some validation for Key, e.g. that there are 5 characters in each box of the key, and a check on the number of boxes. I am a beginner in Python, and even more so with regular expressions. Thanks a lot.
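
    A regex sketch (my addition, assuming the three labels always appear in this order and a key "box" is a 5-character alphanumeric group; the box count of 6 is taken from the example):

        import re

        s = "DATE: 12242010Key Type: Nod32 Anti-Vir (30d trial) Key: a5B2s-sH12B-hgtY3-io87N-srg98-KLMNO"

        pattern = (r"DATE:\s*(?P<DATE>\d+)"                                # digits stop at 'Key Type'
                   r"Key Type:\s*(?P<KeyType>.*?)\s*"                      # lazy, up to the next label
                   r"Key:\s*(?P<Key>[A-Za-z0-9]{5}(?:-[A-Za-z0-9]{5})*)$") # dash-separated 5-char boxes

        m = re.match(pattern, s)
        if m:
            d = {"DATE": m.group("DATE"),
                 "Key Type": m.group("KeyType"),
                 "Key": m.group("Key")}
            boxes = d["Key"].split("-")
            # validation: six boxes of five characters each (assumed format)
            assert len(boxes) == 6 and all(len(b) == 5 for b in boxes)
            print(d)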

  • Is it possible to append html code in parts via JQuery?

    - by phpheini
    I am trying to append li elements coming from a PHP file with jQuery. The problem is that the HTML code needs to be separately appended to different HTML IDs according to the key value. Unfortunately, as I understand it, append() can only append correct HTML with all elements closed; otherwise it will automatically close the tags. The following code will NOT work, because dval contains code like <div><li class="some">Some value</li> and append() will make <div><li class="some">Some value</li></div> out of it. So I was wondering whether there is another way, maybe a function other than append(), to be able to append HTML in parts?

        $.each(obj, function(key, val) {
            $.each(obj[key], function(key, dval) {
                if (key == "text") {
                    $("#" + key).append(dval);
                }
            })
        });
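
    One workaround sketch (my addition, assuming the fragments concatenate into valid HTML per target ID): since the DOM can never hold half-open tags, buffer the pieces as plain strings and call append() once per ID when the markup is complete. This mirrors the selector logic of the original loop:

        // Sketch: accumulate fragments per target id, then append each completed string once.
        var buffers = {};
        $.each(obj, function (key, val) {
            $.each(obj[key], function (key, dval) {
                if (key == "text") {
                    buffers[key] = (buffers[key] || "") + dval;
                }
            });
        });
        $.each(buffers, function (id, html) {
            $("#" + id).append(html); // html is now well-formed
        });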

  • Fast (twice in <1s) pressing of the same key on the keyboard is not recognized correctly. What can it be?

    - by aldo85ita
    If I press any key twice quickly (I mean with less than one second of delay between the keystrokes), Ubuntu doesn't detect the second press. Ubuntu does seem to register the keystroke - when I press Backspace I can hear the key-press sound - but it has no effect (the letter is not inserted into the text, or deleted in the Backspace case). How can I fix this? Note: I am using Ubuntu 11.10.

  • Why is casting and comparing in PHP faster than is_*?

    - by tstenner
    While optimizing a function in PHP, I changed

        if (is_array($obj))
            foreach ($obj as $key => $value) { [snip] }
        else if (is_object($obj))
            foreach ($obj as $key => $value) { [snip] }

    to

        if ($obj == (array) $obj)
            foreach ($obj as $key => $value) { [snip] }
        else if ($obj == (object) $obj)
            foreach ($obj as $key => $value) { [snip] }

    After learning about ===, I changed that to

        if ($obj === (array) $obj)
            foreach ($obj as $key => $value) { [snip] }
        else if ($obj === (object) $obj)
            foreach ($obj as $key => $value) { [snip] }

    Changing each test from is_* to casting resulted in a major speedup (30%). I understand that === is faster than == as no coercion has to be done, but why is casting the variable so much faster than calling any of the is_* functions?
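
    A minimal benchmark sketch for reproducing the comparison (my addition; the 30% figure is the poster's, this just shows one way to measure it):

        <?php
        // Sketch: time one million is_array() calls vs. cast-and-compare.
        $obj = range(1, 100);
        $n = 1000000;

        $t = microtime(true);
        for ($i = 0; $i < $n; $i++) { $r = is_array($obj); }
        printf("is_array:         %.3fs\n", microtime(true) - $t);

        $t = microtime(true);
        for ($i = 0; $i < $n; $i++) { $r = ($obj === (array) $obj); }
        printf("cast and compare: %.3fs\n", microtime(true) - $t);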

  • How to deal with 2 almost identical tables

    - by jgritty
    I have a table of baseball stats, something like this:

        CREATE TABLE batting_stats(
          ab INTEGER,
          pa INTEGER,
          r INTEGER,
          h INTEGER,
          hr INTEGER,
          rbi INTEGER,
          playerID INTEGER,
          FOREIGN KEY(playerID) REFERENCES player(playerID)
        );

    But then I have a table of stats that are basically exactly the same, but for a team:

        CREATE TABLE team_batting_stats(
          ab INTEGER,
          pa INTEGER,
          r INTEGER,
          h INTEGER,
          hr INTEGER,
          rbi INTEGER,
          teamID INTEGER,
          FOREIGN KEY(teamID) REFERENCES team(teamID)
        );

    My first instinct is to scrap the foreign key and generalize the ID, but I still have a problem: I have these two tables, and they can't have overlapping IDs:

        CREATE TABLE player(
          playerID INTEGER PRIMARY KEY,
          firstname TEXT,
          lastname TEXT,
          number INTEGER,
          teamID INTEGER,
          FOREIGN KEY(teamID) REFERENCES team(teamID)
        );

        CREATE TABLE team(
          teamID INTEGER PRIMARY KEY,
          name TEXT,
          city TEXT
        );

    I feel like I'm overlooking something obvious that could solve this problem and reduce the stats to a single table.
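
    A sketch of one common way out (my suggestion, not from the original post): introduce a supertype table that owns the shared ID space, have player and team borrow their primary keys from it, and hang a single stats table off the supertype key:

        -- hypothetical supertype/subtype design; names are illustrative
        CREATE TABLE batting_entity(
          entityID INTEGER PRIMARY KEY    -- shared ID space for players and teams
        );

        CREATE TABLE team(
          teamID INTEGER PRIMARY KEY,     -- holds an entityID value
          name TEXT,
          city TEXT,
          FOREIGN KEY(teamID) REFERENCES batting_entity(entityID)
        );

        CREATE TABLE player(
          playerID INTEGER PRIMARY KEY,   -- holds an entityID value
          firstname TEXT,
          lastname TEXT,
          number INTEGER,
          teamID INTEGER,
          FOREIGN KEY(playerID) REFERENCES batting_entity(entityID),
          FOREIGN KEY(teamID) REFERENCES team(teamID)
        );

        CREATE TABLE batting_stats(
          ab INTEGER, pa INTEGER, r INTEGER, h INTEGER, hr INTEGER, rbi INTEGER,
          entityID INTEGER,
          FOREIGN KEY(entityID) REFERENCES batting_entity(entityID)
        );

    Inserting a player or team then means inserting into batting_entity first and reusing the generated ID, which is what guarantees the two ID ranges never overlap.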

  • Grub loading. The symbol ' ' not found. Aborted. Press any key...

    - by John
    Hi there, I have a dual-boot system on a Dell XPS 9000 with Windows 7 and Ubuntu. But after I performed a system backup on it, as requested by Windows 7, I am no longer able to boot the computer; instead, right after the BIOS I get the following message:

        Grub loading.
        The symbol ' ' not found.
        Aborted. Press any key...

    I tried changing the BIOS boot configuration to start with the hard drive, and it still returned the same message. Using the Windows boot disk only asks me to do another system backup, or threatens to delete my hard drive completely. The only solution I have so far is to reinstall Ubuntu, but that leaves two additional copies of Ubuntu on my computer. Is there a simpler way to fix the situation so I can actually boot into Windows? Thanks so much.

  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really great with the finger-swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change - right now you can't drag a collection from the Data Controls palette and drop it as a carousel. So here is a quick workaround for that, and details about setting up carousels in your application.

    First thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get from following this tutorial. Then you drag the emps collection and drop it in your amx page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel in our page. If you look now in your page's bindings you'll see something like this: [binding editor screenshot]

    You can now comment out the whole iterator code in your page's source.

    Next let's add the carousel. From the Component palette drag the carousel (from the Data View category) to the page. Then drag a carousel item and drop it in the nodeStamp facet of the carousel.

    Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple, just copy the var and value attributes from the iterator tag to the carousel tag:

        var="row" value="#{bindings.emps.collectionModel}"

    Next drop a panelFormLayout, or another layout panel, into the carousel item. Into that panelFormLayout you can now drop items and bind their value property to row.attributeName - basically copying the way it is done in the fields in the iterator, for example: value="#{row.hireDate}". By the way, you can also copy other attributes like the label.

    And that's it. Your code should end up looking something like this:

        <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
          <amx:facet name="nodeStamp">
            <amx:carouselItem id="ci1">
              <amx:panelFormLayout id="pfl1">
                <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
                <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
              </amx:panelFormLayout>
            </amx:carouselItem>
          </amx:facet>
        </amx:carousel>

    And when you run your application it will look like this: [screenshot]

  • BIND DNS Master with Zerigo Slaves - BIND won't update the slave servers

    - by Anthony
    I've tried to resolve this myself and have looked through Google and Stack but haven't found the answer I'm looking for.

    Currently on a VPS server I have BIND installed as a MASTER DNS server. I use Zerigo's DNS service as SLAVE servers for public use. The master doesn't receive queries - its job is simply to create and modify DNS entries locally, which the slaves then serve.

    Here is an excerpt of the BIND log (I set it to INFO event logging):

        14-Apr-2012 23:00:00.234 general: info: received control channel command 'reload'
        14-Apr-2012 23:00:00.234 general: info: loading configuration from 'C:\DNS\BIND\etc\named.conf'
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv4 port range: [1024, 65535]
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv6 port range: [1024, 65535]
        14-Apr-2012 23:00:00.250 general: info: reloading configuration succeeded
        14-Apr-2012 23:00:00.250 general: info: reloading zones succeeded
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR ended
        14-Apr-2012 23:16:23.015 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:23.031 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR ended

    As you can see, there is no problem with Zerigo's DNS servers requesting new DNS data - when I force a reload, that is; I don't believe, given the way they are set up as SLAVE, that they poll for changes. The problem is the other way around: the MASTER is not updating the SLAVE servers when reload is run (on the MASTER); it is a batch job on a 15-minute timer.

    Below is my named.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        acl "trusted" {
            174.36.24.251/32;
            68.71.141.22/32;
            localhost;
        };

        options {
            version "not currently available";
            directory "C:\DNS\BIND\etc";
            allow-query { trusted; };
        };

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

        logging {
            channel simple_log {
                file "C:\DNS\BIND\logging\bind.log" versions 3 size 5m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
        };

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            allow-update { none; };
        };

    Does my problem have something to do with 'allow-query' under options? You will notice that 'allow-transfer' is set explicitly on each DNS zone.

    In case you need it, here is my rndc.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        options {
            default-key "rndc-key";
            default-server 127.0.0.1;
            default-port 953;
        };

        server localhost {
            key "rndc-key";
        };

    Note: I am using WebsitePanel as my hosting panel, which is why it creates the zone entries the way it does. Although I know I can change this behaviour, I do not wish to do so, nor do I believe it is the root of the problem. Thanks for your help.
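
    One thing worth checking (my assumption, not confirmed by the post): BIND only sends NOTIFY messages to the name servers listed in the zone's NS records. If the Zerigo slaves are not in the NS set (the master only knowing them via allow-transfer is not enough), notifications never reach them and they fall back to polling at the SOA refresh interval. An explicit also-notify in each zone forces notifications to named IPs; a sketch:

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            also-notify { 174.36.24.251; 68.71.141.22; };  # push NOTIFYs to the slaves
            allow-update { none; };
        };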

  • WCF net.tcp client/server connection failing with "Stream Security is required"

    - by Tom W.
    I am trying to test a simple WCF net.tcp client/server app. The WCF service is being hosted on Windows 7 IIS. I have enabled net.tcp in IIS. I granted liberal security privileges to the service app by configuring an app pool with admin rights, and set the IIS service application to run in that context. I enabled tracing on the service app to troubleshoot. Whenever I run a simple method call against the service from the WCF client app, I get the following exception:

        "Stream Security is required at http://www.w3.org/2005/08/addressing/anonymous, but no security context was negotiated. This is likely caused by the remote endpoint missing a StreamSecurityBindingElement from its binding."

    Here is my client configuration:

        <bindings>
          <netTcpBinding>
            <binding name="InsecureTcp">
              <security mode="None" />
            </binding>
          </netTcpBinding>
        </bindings>

    Here is my service configuration:

        <bindings>
          <netTcpBinding>
            <binding name="InsecureTcp">
              <security mode="None" />
            </binding>
          </netTcpBinding>
        </bindings>
        <services>
          <service name="OrderService" behaviorConfiguration="debugServiceBehavior">
            <endpoint address=""
                      binding="netTcpBinding"
                      bindingConfiguration="InsecureTcp"
                      contract="ProtoBufWcfService.IOrder" />
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="debugServiceBehavior">
              <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
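
    One detail worth double-checking (my assumption; the post shows the client's binding but not its endpoint element): if the client endpoint never references the InsecureTcp binding via bindingConfiguration, WCF falls back to the default netTcpBinding, whose security mode is Transport, and a Transport-secured client against a security-mode-None service produces exactly this kind of stream-security mismatch. A client endpoint that actually uses the insecure binding might look like this (the address is hypothetical):

        <client>
          <endpoint address="net.tcp://localhost:808/OrderService.svc"
                    binding="netTcpBinding"
                    bindingConfiguration="InsecureTcp"
                    contract="ProtoBufWcfService.IOrder" />
        </client>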

  • Consume webservice from a .NET DLL - app.config problem

    - by Asaf R
    Hi, I'm building a DLL - let's call it mydll.dll - and in it I sometimes need to call methods from a web service, myservice. mydll.dll is built using C# and .NET 3.5. To consume myservice from mydll I've used Add Service Reference in Visual Studio 2008, which is more or less the same as using svcutil.exe. Doing so creates a class I can instantiate, and adds endpoint and binding configurations to the mydll app.config.

    The problem here is that the mydll app.config is never loaded. Instead, what's loaded is the app.config or web.config of the program I use mydll in. I expect mydll to evolve, which is why I've decoupled its functionality from the rest of my system to begin with. During that evolution it will likely add more web services to call, ruling out manual copy-paste ways to overcome this problem.

    I've looked at several possible approaches to attacking this issue:

        1. Manually copy endpoints and bindings from the mydll app.config to the target EXE or web .config file. Couples the modules; not flexible.
        2. Include endpoints and bindings from the mydll app.config in the target .config using configSource (see here). Also adds coupling between modules.
        3. Programmatically load the mydll app.config, read the endpoints and bindings, and instantiate Binding and EndpointAddress (see the sketch below).
        4. Use a different tool to create the local frontend for myservice.

    I'm not sure which way to go. Option 3 sounds promising, but as it turns out it's a lot of work and will probably introduce several bugs, so it doubtfully pays off. I'm also not familiar with any tool other than the canonical svcutil.exe. Please either give pros and cons for the above alternatives, provide tips for implementing any of them, or suggest other approaches. Thanks, Asaf
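
    A minimal sketch of a variant of option 3 (my addition): rather than parsing mydll's app.config at runtime, the library can construct the binding and endpoint in code, which sidesteps the config-loading problem entirely. The binding type, settings and URL below are assumptions for illustration, and MyServiceClient stands for the proxy class that Add Service Reference generated (generated proxies expose a (Binding, EndpointAddress) constructor):

        using System.ServiceModel;

        // Sketch: configure the generated proxy entirely in code, so no entry is
        // needed in the hosting application's app.config/web.config.
        public static class MyServiceFactory
        {
            public static MyServiceClient Create()
            {
                var binding = new BasicHttpBinding();          // assumed binding type
                binding.MaxReceivedMessageSize = 1024 * 1024;  // tune as needed

                var address = new EndpointAddress(
                    "http://example.com/myservice.svc");       // hypothetical URL

                return new MyServiceClient(binding, address);
            }
        }

    The trade-off of this design is that endpoint details move from configuration into code, so changing the service URL means either recompiling or reading the URL from a setting the DLL controls itself.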

  • How to dispatch a new property value in an object to the same property of two other objects

    - by WPFadvocate
    In WPF, I have three objects exposing the same DependencyProperty (let's say it's an integer). I want all three property values to remain synchronized, i.e. whenever the int value changes in one object, the value is propagated to the two other objects. I thought of multibinding to do the job, but I don't know how to detect which object changed, and thus which value should be used and propagated to the other objects.

    Edited: here is my tentative code for multibinding, with the false hope that it would work without additional code:

        // create the multibinding
        MultiBinding mb = new MultiBinding()
        {
            Mode = BindingMode.TwoWay,
            UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged
        };

        // create individual bindings to associate object_2 and object_3 with object_1
        Binding b2 = new Binding() { Source = object_2, Path = new PropertyPath("X") };
        Binding b3 = new Binding() { Source = object_3, Path = new PropertyPath("X") };

        // add the individual bindings to the multibinding
        mb.Bindings.Add(b2);
        mb.Bindings.Add(b3);

        // bind object_2 and _3 to object_1
        BindingOperations.SetBinding(object_1, TypeObject_1.XProperty, mb);

    But actually there is a runtime error, saying the binding set by the last instruction is lacking a converter. But again, I don't know how to write this converter: there is nothing to convert (as opposed to the related MS sample linking 3 RGB properties to a color property), only to forward the value of the property that changed to the two other properties.

    I understand I could solve the problem by creating an X_Changed event in the 3 types and then have each object register to the two other objects' events. I don't like this "manual" way and would prefer to bind the 3 properties together.
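
    A sketch of the missing piece (my code, not from the post): a MultiBinding always requires an IMultiValueConverter. For a pure pass-through, Convert can forward one source value to the target, and ConvertBack can fan the target value back out to every source; on a TwoWay binding that keeps the sources equal whenever the target changes. Note it does not by itself answer "which source changed" - it simply forwards the first source value, which is adequate once all sources are held equal:

        using System;
        using System.Globalization;
        using System.Linq;
        using System.Windows.Data;

        // Sketch: pass-through converter for synchronizing identically-typed properties.
        public class SyncConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType,
                                  object parameter, CultureInfo culture)
            {
                // Forward the first source value to the target.
                return values.FirstOrDefault();
            }

            public object[] ConvertBack(object value, Type[] targetTypes,
                                        object parameter, CultureInfo culture)
            {
                // Fan the new target value back out to every source binding.
                return targetTypes.Select(_ => value).ToArray();
            }
        }

        // usage: mb.Converter = new SyncConverter();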

  • How to store data in RAM in Verilog

    - by anum
    I have a bit stream of 128 bits at each posedge of clk, i.e. 10 bit streams in total, each of length 128 bits. I want to divide each 128-bit stream into 8-bit chunks and store them in a RAM/memory of width 8 bits. I did it by assigning the 8-bit slices to wires of size 8 bits; this way there are 16 wires. I am using a dual-port RAM. When I call the memory module in the stimulus I don't know how to give the input, as I have 16 different wires named k1 to k16.

    Code (this is the stimulus file):

        module final_stim;
          reg [7:0] in, in_data;
          reg clk, rst_n, rd, wr, rd_data, wr_data;
          wire [7:0] out, out_wr, ouut;
          wire [7:0] d;
          integer i;
          //wire [7:0] xor_out;
          reg kld, f;
          reg [127:0] key;
          wire [127:0] key_expand;
          wire [7:0] out_data;
          reg [7:0] k;
          //wire [7:0] k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16;
          wire [7:0] out_data1;

          // key_expand is the output which gives 10 streams of size 128 bits.
          assign k1  = key_expand[127:120];
          assign k2  = key_expand[119:112];
          assign k3  = key_expand[111:104];
          assign k4  = key_expand[103:96];
          assign k5  = key_expand[95:88];
          assign k6  = key_expand[87:80];
          assign k7  = key_expand[79:72];
          assign k8  = key_expand[71:64];
          assign k9  = key_expand[63:56];
          assign k10 = key_expand[55:48];
          assign k11 = key_expand[47:40];
          assign k12 = key_expand[39:32];
          assign k13 = key_expand[31:24];
          assign k14 = key_expand[23:16];
          assign k15 = key_expand[15:8];
          assign k16 = key_expand[7:0];

          // The memory module is instantiated here. k1 is sent as input, but I
          // don't know how to save the other values of k. I tried to use a for
          // loop but it didn't help.
          memory m1(clk, rst_n, rd, wr, k1, out_data1);
          aes_sbox b(out, d);

          initial begin
            clk = 1'b1;
            rst_n = 1'b0;
            #20 rst_n = 1; /*rd = 1'b1;*/ wr_data = 1'b1; in = 8'hd4;
            #20 /*rst_n = 1'b1;*/ in = 8'h27; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h11; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'hae; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'he0; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'hbf; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h98; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'hf1; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'hb8; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'hb4; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h5d; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'he5; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h1e; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h41; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h52; rd_data = 1'b0; wr_data = 1'b1;
            #20 in = 8'h30; rd_data = 1'b0; wr_data = 1'b1;
            #20 wr_data = 1'b0;
            #380 rd_data = 1'b1;
            #320 rd_data = 1'b0;

            #10 kld = 1'b1; key = 128'h2b7e151628aed2a6abf7158809cf4f3c;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b0;
            #10 wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 kld = 1'b0; key = 128'h2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1;
            #20 wr = 1'b0;
            #20 rd = 1'b1;
            #4880 f = 1'b1;
          end

          /* always @(*) begin
               while (i)
                 mem[i] ^ mem1[i];
               i <= 16;
               break;
             end */

          always #10 clk = ~clk;

          always @(posedge clk) begin
            //$monitor($time," out_wr=%h,out_rd=%h\n ",out_wr,out);
            #10000 $stop;
          end
        endmodule
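
    A sketch of one way to avoid the 16 named wires entirely (my suggestion, not from the post): slice the 128-bit bus into an array of bytes with a Verilog-2001 generate loop, then feed the RAM from the array, e.g. one byte per clock through an index counter. The byte_idx register and the write handshaking are assumptions about how the surrounding design drives the RAM:

        // Sketch: replace k1..k16 with a byte array sliced from key_expand.
        wire [7:0] k [0:15];
        genvar gi;
        generate
          for (gi = 0; gi < 16; gi = gi + 1) begin : slice
            // k[0] gets bits [127:120], k[1] gets [119:112], and so on.
            assign k[gi] = key_expand[127 - 8*gi -: 8];
          end
        endgenerate

        // Walk the array with a counter and present one byte per cycle
        // to the RAM's data input:
        reg  [3:0] byte_idx;
        wire [7:0] ram_din = k[byte_idx];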

  • Spring transactions not committing

    - by Clinton Bosch
    I am struggling to get my Spring-managed transactions to commit. Could someone please spot what I have done wrong? All my tables are MySQL InnoDB tables.

    My RemoteServiceServlet (GWT) is as follows:

        public class TrainTrackServiceImpl extends RemoteServiceServlet implements TrainTrackService {
            @Autowired
            private DAO dao;

            @Override
            public void init(ServletConfig config) throws ServletException {
                super.init(config);
                WebApplicationContext ctx = WebApplicationContextUtils.getRequiredWebApplicationContext(config.getServletContext());
                AutowireCapableBeanFactory beanFactory = ctx.getAutowireCapableBeanFactory();
                beanFactory.autowireBean(this);
            }

            @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
            public UserDTO createUser(String firstName, String lastName, String idNumber,
                                      String cellPhone, String email, int merchantId) {
                User user = new User();
                user.setFirstName(firstName);
                user.setLastName(lastName);
                user.setIdNumber(idNumber);
                user.setCellphone(cellPhone);
                user.setEmail(email);
                user.setDateCreated(new Date());
                Merchant merchant = (Merchant) dao.find(Merchant.class, merchantId);
                if (merchant != null) {
                    user.setMerchant(merchant);
                }
                // Save the user.
                dao.saveOrUpdate(user);
                UserDTO dto = new UserDTO();
                dto.id = user.getId();
                dto.firstName = user.getFirstName();
                dto.lastName = user.getLastName();
                return dto;
            }

    The DAO is as follows:

        public class DAO extends HibernateDaoSupport {
            private String adminUsername;
            private String adminPassword;
            private String godUsername;
            private String godPassword;

            // getters and setters for the four properties above omitted for brevity

            public void saveOrUpdate(ModelObject obj) {
                getHibernateTemplate().saveOrUpdate(obj);
            }

    And my applicationContext.xml is as follows:

        <context:annotation-config/>
        <context:component-scan base-package="za.co.xxx.traintrack.server"/>

        <!-- Application properties -->
        <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="locations">
                <list>
                    <value>file:${user.dir}/@propertiesFile@</value>
                </list>
            </property>
        </bean>

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">${connection.dialect}</prop>
                    <prop key="hibernate.connection.username">${connection.username}</prop>
                    <prop key="hibernate.connection.password">${connection.password}</prop>
                    <prop key="hibernate.connection.url">${connection.url}</prop>
                    <prop key="hibernate.connection.driver_class">${connection.driver.class}</prop>
                    <prop key="hibernate.show_sql">${show.sql}</prop>
                    <prop key="hibernate.hbm2ddl.auto">update</prop>
                    <prop key="hibernate.c3p0.min_size">5</prop>
                    <prop key="hibernate.c3p0.max_size">20</prop>
                    <prop key="hibernate.c3p0.timeout">300</prop>
                    <prop key="hibernate.c3p0.max_statements">50</prop>
                    <prop key="hibernate.c3p0.idle_test_period">60</prop>
                </props>
            </property>
            <property name="annotatedClasses">
                <list>
                    <value>za.co.xxx.traintrack.server.model.Answer</value>
                    <value>za.co.xxx.traintrack.server.model.Company</value>
                    <value>za.co.xxx.traintrack.server.model.CompanyRegion</value>
                    <value>za.co.xxx.traintrack.server.model.Merchant</value>
                    <value>za.co.xxx.traintrack.server.model.Module</value>
                    <value>za.co.xxx.traintrack.server.model.Question</value>
                    <value>za.co.xxx.traintrack.server.model.User</value>
                    <value>za.co.xxx.traintrack.server.model.CompletedModule</value>
                </list>
            </property>
        </bean>

        <bean id="dao" class="za.co.xxx.traintrack.server.DAO">
            <property name="sessionFactory" ref="sessionFactory"/>
            <property name="adminUsername" value="${admin.user.name}"/>
            <property name="adminPassword" value="${admin.user.password}"/>
            <property name="godUsername" value="${god.user.name}"/>
            <property name="godPassword" value="${god.user.password}"/>
        </bean>

        <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory">
                <ref local="sessionFactory"/>
            </property>
        </bean>

        <!-- enable the configuration of transactional behavior based on annotations -->
        <tx:annotation-driven transaction-manager="transactionManager"/>

    If I change the sessionFactory configuration to autocommit=true, then my object does get persisted:

        <prop key="hibernate.connection.autocommit">true</prop>
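
    One possible explanation worth noting (my assumption, not confirmed in the post): @Transactional only takes effect on beans that Spring itself creates and proxies. A GWT RemoteServiceServlet is instantiated by the servlet container, and autowireBean() injects its dependencies but does not wrap the instance in a transactional proxy, so createUser() would run without any transaction. A sketch of a programmatic workaround using TransactionTemplate (Spring 3 style; imports from org.springframework.transaction.support):

        // Sketch: drive the transaction explicitly, since the servlet is not a proxied Spring bean.
        @Autowired
        private PlatformTransactionManager transactionManager;

        public UserDTO createUser(final String firstName, final String lastName /* , ... */) {
            TransactionTemplate tx = new TransactionTemplate(transactionManager);
            return tx.execute(new TransactionCallback<UserDTO>() {
                public UserDTO doInTransaction(TransactionStatus status) {
                    // ... the existing createUser body goes here ...
                    return dto;
                }
            });
        }

    An alternative along the same lines would be to move the transactional work into a Spring-managed service bean and delegate to it from the servlet.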

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple "select count(*)" against the user table to determine whether the user was persisted to the database or not. The test always fails, though. I was suspicious, because in the Spring controller I wrote, saving an instance of User to the db succeeds. So I added the Rollback annotation to the unit test and, sure enough, the data is written to the database - I can even see it in the appropriate table - so the transaction isn't rolled back when the test case is finished.

    Here's my test case:

        @ContextConfiguration(locations = {
                "classpath:context-daos.xml",
                "classpath:context-dataSource.xml",
                "classpath:context-hibernate.xml"})
        public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

            @Autowired
            private UserDao userDao;

            @Test
            @Rollback(false)
            public void teseCreateUser() {
                try {
                    UserModel user = randomUser();
                    String username = user.getUserName();
                    long id = userDao.create(user);
                    String query = "select count(*) from public.usr where usr_name = '%s'";
                    long count = simpleJdbcTemplate.queryForLong(String.format(query, username));
                    Assert.assertEquals("User with username should be in the db", 1, count);
                } catch (Exception e) {
                    e.printStackTrace();
                    Assert.assertNull("testCreateUser: " + e.getMessage());
                }
            }
        }

    I think I was remiss by not adding the configuration files.

    context-hibernate.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
                <property name="staticField">
                    <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value>
                </property>
            </bean>

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
                  destroy-method="destroy" scope="singleton">
                <property name="namingStrategy">
                    <ref bean="namingStrategy"/>
                </property>
                <property name="dataSource" ref="dataSource"/>
                <property name="mappingResources">
                    <list>
                        <value>com/company/model/usr.hbm.xml</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                        <prop key="hibernate.use_sql_comments">true</prop>
                        <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop>
                        <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
                        <prop key="hibernate.cache.use_query_cache">true</prop>
                        <prop key="hibernate.cache.use_minimal_puts">false</prop>
                        <prop key="hibernate.cache.use_second_level_cache">true</prop>
                        <prop key="hibernate.current_session_context_class">thread</prop>
                    </props>
                </property>
            </bean>

            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory"/>
                <property name="nestedTransactionAllowed" value="false"/>
            </bean>

            <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
                <property name="transactionManager">
                    <ref local="transactionManager"/>
                </property>
                <property name="transactionAttributes">
                    <props>
                        <prop key="create">PROPAGATION_REQUIRED</prop>
                        <prop key="delete">PROPAGATION_REQUIRED</prop>
                        <prop key="update">PROPAGATION_REQUIRED</prop>
                        <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop>
                    </props>
                </property>
            </bean>
        </beans>

    context-dataSource.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
                <property name="driverClass" value="org.postgresql.Driver"/>
                <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/company_dev"/>
                <property name="user" value="postgres"/>
                <property name="password" value="postgres"/>
            </bean>
        </beans>

    context-daos.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/>
            <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/>

            <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl"
                  abstract="true" depends-on="sessionFactory">
                <property name="sessionFactory">
                    <ref bean="sessionFactory"/>
                </property>
                <property name="namingStrategy">
                    <ref bean="extendedFinderNamingStrategy"/>
                </property>
            </bean>

            <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
                <property name="interceptorNames">
                    <list>
                        <value>transactionInterceptor</value>
                        <value>finderIntroductionAdvisor</value>
                    </list>
                </property>
            </bean>

            <bean id="userDao" parent="abstractDao">
                <property name="proxyInterfaces">
                    <value>com.company.dao.UserDao</value>
                </property>
                <property name="target">
                    <bean parent="abstractDaoTarget">
                        <constructor-arg>
                            <value>com.company.model.UserModel</value>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </beans>

    Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here, because I'm not sure it's needed, but this is what I'm working with. Any help much appreciated.

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS server dynamically using nsupdate.

    Prerequisites: I'm using Debian 6 on my DNS server and Debian 4 on my client. I created a public/private key pair using:

        dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com.

    I then edited my named.conf.local to contain my public key and the new zone I wish to update. It now looks like this (note: I also tried allow-update { any; }; without success):

        zone "example.com" {
            type master;
            file "/etc/bind/primary/example.com";
            notify yes;
            allow-update { none; };
            allow-query { any; };
        };

        zone "sub.example.com" {
            type master;
            file "/etc/bind/primary/sub.example.com";
            notify yes;
            allow-update { key "sub.example.com."; };
            allow-query { any; };
        };

        key sub.example.com. {
            algorithm HMAC-MD5;
            secret "xxxx xxxx";
        };

    Next, I copied the private key file (key.private) to another server I want to update the zone from. I also created a text file (update) on this server which contains the update information (note: I tried toying around with this too, with no success):

        server example.com
        zone sub.example.com
        update add sub.example.com. 86400 A 10.10.10.1
        show
        send

    Now I'm trying to update the zone using:

        nsupdate -k key.private -v update

    The problem: said command gives me the following output:

        Outgoing update query:
        ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
        ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
        ;; ZONE SECTION:
        ;sub.example.com. IN SOA

        ;; UPDATE SECTION:
        sub.example.com. 86400 IN A 10.10.10.1

        update failed: SERVFAIL

    named debug level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP):

        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection
        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle
        06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp

    But it doesn't do anything: the zone isn't updated, and my nsupdate changes nothing. I'm not sure if the file /etc/bind/primary/sub.example.com should exist prior to the first update or not (see the sketch after the directory listings below). I tried it without the file, with an empty file, and with a pre-configured zone file - without success.

    The sparse information I found on the net pointed me towards file and folder permissions regarding the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not 100% sure the permissions are correct, but they look good to me:

        ls -lah /var/cache/bind/
        total 224K
        drwxrwxr-x  2 bind bind 4.0K Aug  6 03:13 .
        drwxr-xr-x 12 root root 4.0K Jul 21 11:27 ..
        -rw-r--r--  1 bind bind 211K Aug  6 03:21 named.run

        ls -lah /etc/bind/
        total 72K
        drwxr-sr-x  3 bind bind 4.0K Aug  6 14:41 .
        drwxr-xr-x 87 root root 4.0K Jul 30 01:24 ..
        -rw-------  1 bind bind  125 Aug  6 02:54 key.public
        -rw-------  1 bind bind  156 Aug  6 02:54 key.private
        -rw-r--r--  1 bind bind 2.5K Aug  6 03:07 bind.keys
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.0
        -rw-r--r--  1 bind bind  271 Aug  6 03:07 db.127
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.255
        -rw-r--r--  1 bind bind  353 Aug  6 03:07 db.empty
        -rw-r--r--  1 bind bind  270 Aug  6 03:07 db.local
        -rw-r--r--  1 bind bind 3.0K Aug  6 03:07 db.root
        -rw-r--r--  1 bind bind  493 Aug  6 03:32 named.conf
        -rw-r--r--  1 bind bind  490 Aug  6 03:07 named.conf.default-zones
        -rw-r--r--  1 bind bind 1.2K Aug  6 14:18 named.conf.local
        -rw-r--r--  1 bind bind  666 Jul 29 22:51 named.conf.options
        drwxr-sr-x  2 bind bind 4.0K Aug  6 03:57 primary/
        -rw-r-----  1 root bind   77 Mar 19 02:57 rndc.key
        -rw-r--r--  1 bind bind 1.3K Aug  6 03:07 zones.rfc1918

        ls -lah /etc/bind/primary/
        total 20K
        drwxr-sr-x 2 bind bind 4.0K Aug  6 03:57 .
        drwxr-sr-x 3 bind bind 4.0K Aug  6 14:41 ..
        -rw-r--r-- 1 bind bind  356 Jul 30 00:45 example.com
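
    On the question of whether the zone file must exist first (my addition): yes - BIND will not serve, or accept updates for, a master zone whose file fails to load, and an empty file fails because every zone needs at least an SOA and an NS record. A minimal skeleton might look like this (the NS name and SOA fields are placeholders; once dynamic updates start, named maintains the file plus a .jnl journal alongside it, so the directory must be writable by the bind user):

        ; sketch of /etc/bind/primary/sub.example.com (values are illustrative)
        $TTL 86400
        @   IN  SOA ns1.example.com. hostmaster.example.com. (
                    1          ; serial
                    3600       ; refresh
                    900        ; retry
                    604800     ; expire
                    86400 )    ; negative-caching TTL
            IN  NS  ns1.example.com.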

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 - Table per Hierarchy (TPH), and Part 2 - Table per Type (TPT). In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, mainly based on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason for that is just that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in the other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: [figure]

    As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
    Here is the complete object model, as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it's worth mentioning that EntityMappingConfiguration turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with ToTable to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration, and there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same primary key value by the database (i.e. BillingDetailId = 1 on both), and the ObjectStateManager cannot track two objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds, will solve the problem, yet another solution is to completely switch off identity on the primary key property. As a result, we need to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution, since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result by using the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With the Object Model

    Our TPC mapping is ready, and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well.
    After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails, and the resulting SQL statements that are executed in the database: [screenshot]

        var query = from b in context.BillingDetails select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements that you see at the beginning of the query are merely there to ensure that columns which are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    As you can see in the screenshot above, the first SELECT uses a FROM-clause subquery to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well, since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it has better performance than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single "best strategy fits all scenarios"; as you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario:

    If you don't require polymorphic associations or queries, lean toward TPC - in other words, if you never or rarely query for BillingDetails, and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.

    If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

    If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies, as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References: ADO.NET team blog; Java Persistence with Hibernate book

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use a Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role.

    - WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases.
    - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000-concurrent-requests limitation.
    - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
    - WACS provides service configuration features which can be changed while the role is running.
    - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
    - Since with a WACS worker role we start the node process ourselves, we can control the input, output and error streams. We can also control the version of Node.js.

    Run Node.js in Worker Role

    Node.js can be started by just having its execution file. This means that in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class.

    Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe".

    Then we need to create the Node.js entry code. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project, we cannot create a JavaScript file from the "Add new item" context menu; we have to create a text file and then rename it to the JavaScript extension.

    After we added these two files, we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed.

    Let's paste some very simple Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345.

        var http = require("http");
        var port = 12345;

        http.createServer(function (req, res) {
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end("Hello World\n");
        }).listen(port);

        console.log("Server running at port %d", port);

    Then we need to start "node.exe" with this file when our worker role is started. This can be done in its Run method: I find the Node.js executable and the entry JavaScript file name, then create a new process to run it. Our worker role will wait for the process to exit.
If everything is OK, once our web server is up the process will keep listening for incoming requests and should not terminate. The code in the worker role looks like this.

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("NodejsHost entry point called", "Information");

    // Retrieve node.exe and the entry node.js source code file name.
    var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    var js = "index.js";

    // Prepare the process start information for node.exe.
    var info = new ProcessStartInfo(node, js)
    {
        CreateNoWindow = false,
        ErrorDialog = true,
        WindowStyle = ProcessWindowStyle.Normal,
        UseShellExecute = false,
        WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    };
    Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

    // Start node.exe with the entry code and wait for it to exit.
    var process = Process.Start(info);
    process.WaitForExit();
}

Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to Azure, which requires some additional steps. First, we need to create an input endpoint. By default there is no endpoint defined on a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than through an HTTP endpoint. Then change "index.js" so that our web server listens on 80.

var http = require("http");
var port = 80;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);

Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has already registered port 80 and maps the 80 endpoint to 81, but our Node.js code cannot detect this remapping, so when it tries to listen on 80 it fails, since 80 is already in use.
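A side note that is not part of the original walkthrough: since the C# side of the worker role can talk to the service runtime, one conceivable workaround for the hardcoded port is to resolve the endpoint via RoleEnvironment and hand it to "node.exe" as a command-line argument (read on the Node.js side with process.argv[2]). The endpoint name "NodeEndpoint" below is an assumption; use whatever name you gave the endpoint in the role properties. This is a sketch under those assumptions, not a tested recipe, and the author describes a different approach in the next post.

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class NodeProcess
{
    public static ProcessStartInfo BuildStartInfo()
    {
        var root = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot");
        var node = root + @"\node.exe";
        var js = "index.js";

        // Resolve the port actually assigned to our input endpoint.
        // In the compute emulator this returns the remapped port (e.g. 81),
        // which sidesteps the hardcoded-80 conflict described above.
        var port = RoleEnvironment.CurrentRoleInstance
                                  .InstanceEndpoints["NodeEndpoint"]
                                  .IPEndpoint.Port;

        // Pass the port to node.exe as a command-line argument.
        return new ProcessStartInfo(node, string.Format("{0} {1}", js, port))
        {
            UseShellExecute = false,
            WorkingDirectory = root
        };
    }
}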
Use NPM Modules

When we use WAWS to host Node.js, we can simply install the modules we need and then publish or upload all the files to WAWS. But when we use a WACS worker role, we have to take some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all we download and install the module through the NPM command. But after the install finishes, the files are only on disk, not included in the worker role project; if we deployed the worker role right now the module would not be packaged and uploaded to Azure. Hence we need to add the files to the project. In the Solution Explorer window click the "Show All Files" button, select the "node_modules" folder and choose "Include In Project" from the context menu. But that’s not enough: we also need to set all files in the module to "Copy always" or "Copy if newer", so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since there may be many files in a module. So I created a small tool that updates a C# project file, setting all of its items to "Copy always". The code is very simple.

static void Main(string[] args)
{
    if (args.Length < 1)
    {
        Console.WriteLine("Usage: copyallalways [project file]");
        return;
    }

    var proj = args[0];
    File.Copy(proj, string.Format("{0}.bak", proj));

    var xml = new XmlDocument();
    xml.Load(proj);
    var nsManager = new XmlNamespaceManager(xml.NameTable);
    nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

    // Add the "copy always" output setting to all Content and None items.
    var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
    UpdateNodes(contentNodes, xml, nsManager);
    var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
    UpdateNodes(noneNodes, xml, nsManager);
    xml.Save(proj);

    // Remove the namespace attributes.
    var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
    xml.LoadXml(content);
    xml.Save(proj);
}

static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
{
    foreach (XmlNode node in nodes)
    {
        var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
        if (copyToOutputDirectoryNode == null)
        {
            var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
            n.InnerText = "Always";
            node.AppendChild(n);
        }
        else
        {
            if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
            {
                copyToOutputDirectoryNode.InnerText = "Always";
            }
        }
    }
}

Please be careful when using this tool; I created it only as a demo, so do not use it directly in a production environment. Unload the worker role project and run the tool with the project file name as its command-line argument (for example, copyallalways MyWorkerRole.csproj, with your own project file name); it will set all items to "Copy always". Then reload the worker role project. Now let’s change "index.js" to use express.

var express = require("express");
var app = express();

var port = 80;

app.configure(function () {
});

app.get("/", function (req, res) {
    res.send("Hello Node.js!");
});

app.get("/User/:id", function (req, res) {
    var id = req.params.id;
    res.json({
        "id": id,
        "name": "user " + id,
        "company": "IGT"
    });
});

app.listen(port);

Finally, let’s publish it and have a look in the browser.

Use Windows Azure SQL Database

We can also use Windows Azure SQL Database (a.k.a. WASD) from Node.js hosted on a worker role. Since we can control the version of Node.js, we can use the x64 version of "node-sqlserver" here. This is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project, and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot insert record.");
                }
                else {
                    res.send(200, "Inserted successfully");
                }
            });
        }
    });
});

app.listen(port);

One caution: the commands above are built by concatenating request parameters into SQL strings, which leaves them open to SQL injection; in real code the values should be passed as query parameters rather than spliced into the command text. Publish to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".
Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it’s possible to do some preparatory work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend a worker role for Node.js hosting. But there are some problems with the approach I mentioned here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that, in this way, we cannot retrieve the cloud service configuration information. For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. This should be changed so that our Node.js application can retrieve the endpoint, but I can tell you that it won’t work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to Disable Caps Lock on Mac OS X

    - by The Geek
Unless you’re working in the accounting department, you really don’t need the Caps Lock key, and let’s face it: you’re probably not going to be using a Mac if you do work in accounting. Here’s how to disable the Caps Lock key, or remap it to something else. If you’re using Windows instead, you can follow our guide on how to disable Caps Lock in Windows using a registry hack, or you can map any key to any key if you really want to.

    Read the article

  • Heroku SSL "certificate is only valid for the following names: *.herokuapp.com, herokuapp.com"

    - by benedict_w
I'm trying to set up a GeoTrust SSL certificate for my Heroku app using the SSL Endpoint add-on and the instructions at https://devcenter.heroku.com/articles/ssl-endpoint. I removed the passphrase from my private key using:

openssl rsa -in server.orig.key -out server.key

and added the certificate to Heroku:

heroku certs:add server.crt server.key

Everything seemed to be fine: heroku certs listed the correct information, only with Trusted = false for my certificate. If I go to https://tokyo-2121.herokussl.com the browser says:

You attempted to reach tokyo-2121.herokussl.com, but instead you actually reached a server identifying itself as www.mydomain.com.

As expected, the certificate apparently identifies the correct domain. But when I set up the CNAME to the given tokyo-2121.herokussl.com and visit my subdomain, the browser says:

www.mydomain.com uses an invalid security certificate. The certificate is only valid for the following names: *.herokuapp.com, herokuapp.com

If I run curl -kv https://www.mydomain.com I get:

subjectAltName does not match www.mydomain.com

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >