Search Results

Search found 6628 results on 266 pages for 'foreign keys'.


  • Help with refactoring PHP code

    - by Richard Knop
    I had some trouble implementing Lawler's algorithm, but thanks to SO and a bounty of 200 reputation I finally managed to write a working implementation: http://stackoverflow.com/questions/2466928/lawlers-algorithm-implementation-assistance I feel like I'm using too many variables and loops there, though, so I am trying to refactor the code. It should be simpler and shorter yet remain readable. Does it make sense to make a class for this? Any advice or even help with refactoring this piece of code is welcome:
    <?php
    /*
     * @name Lawler's algorithm PHP implementation
     * @desc This algorithm calculates an optimal schedule of jobs to be
     *       processed on a single machine (in reversed order) while taking
     *       into consideration any precedence constraints.
     * @author Richard Knop
     */
    $jobs = array(1 => array('processingTime' => 2, 'dueDate' => 3),
                  2 => array('processingTime' => 3, 'dueDate' => 15),
                  3 => array('processingTime' => 4, 'dueDate' => 9),
                  4 => array('processingTime' => 3, 'dueDate' => 16),
                  5 => array('processingTime' => 5, 'dueDate' => 12),
                  6 => array('processingTime' => 7, 'dueDate' => 20),
                  7 => array('processingTime' => 5, 'dueDate' => 27),
                  8 => array('processingTime' => 6, 'dueDate' => 40),
                  9 => array('processingTime' => 3, 'dueDate' => 10));
    // precedence constraints, i.e. job 2 must be completed before job 5 etc
    $successors = array(2=>5, 7=>9);
    $n = count($jobs);
    $optimalSchedule = array();
    for ($i = $n; $i >= 1; $i--) {
        // jobs not required to precede any other job
        $arr = array();
        foreach ($jobs as $k => $v) {
            if (false === array_key_exists($k, $successors)) {
                $arr[] = $k;
            }
        }
        // calculate total processing time
        $totalProcessingTime = 0;
        foreach ($jobs as $k => $v) {
            if (true === array_key_exists($k, $arr)) {
                $totalProcessingTime += $v['processingTime'];
            }
        }
        // find the job that will go to the end of the optimal schedule array
        $min = null;
        $x = 0;
        $lastKey = null;
        foreach ($arr as $k) {
            $x = $totalProcessingTime - $jobs[$k]['dueDate'];
            if (null === $min || $x < $min) {
                $min = $x;
                $lastKey = $k;
            }
        }
        // add the job to the optimal schedule array
        $optimalSchedule[$lastKey] = $jobs[$lastKey];
        // remove job from the jobs array
        unset($jobs[$lastKey]);
        // remove precedence constraint from the successors array if needed
        if (true === in_array($lastKey, $successors)) {
            foreach ($successors as $k => $v) {
                if ($lastKey === $v) {
                    unset($successors[$k]);
                }
            }
        }
    }
    // reverse the optimal schedule array and preserve keys
    $optimalSchedule = array_reverse($optimalSchedule, true);
    // add tardiness to the array
    $i = 0;
    foreach ($optimalSchedule as $k => $v) {
        $optimalSchedule[$k]['tardiness'] = 0;
        $j = 0;
        foreach ($optimalSchedule as $k2 => $v2) {
            if ($j <= $i) {
                $optimalSchedule[$k]['tardiness'] += $v2['processingTime'];
            }
            $j++;
        }
        $i++;
    }
    echo '<pre>';
    print_r($optimalSchedule);
    echo '</pre>';
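
    For comparison, the same scheduling rule shrinks to a few named steps once "find the eligible jobs" and "pick the one to schedule last" are separated out. Below is a rough sketch in Python for brevity, not a drop-in PHP replacement, using the job data from the question; treat it as one possible shape for the refactor rather than the definitive one:

    # Lawler's rule: repeatedly pick, among the jobs that no other remaining
    # job must precede, the one whose lateness (total remaining processing
    # time minus due date) is smallest, and schedule it last.
    def lawler(jobs, successors):
        jobs = dict(jobs)          # job id -> (processing_time, due_date)
        succ = dict(successors)    # succ[a] = b means a must finish before b
        reversed_schedule = []
        while jobs:
            eligible = [j for j in jobs if j not in succ]
            total = sum(p for p, _ in jobs.values())
            best = min(eligible, key=lambda j: total - jobs[j][1])
            reversed_schedule.append(best)
            del jobs[best]
            # a constraint a -> best is satisfied once best is scheduled
            succ = {a: b for a, b in succ.items() if b != best}
        return reversed_schedule[::-1]

    jobs = {1: (2, 3), 2: (3, 15), 3: (4, 9), 4: (3, 16), 5: (5, 12),
            6: (7, 20), 7: (5, 27), 8: (6, 40), 9: (3, 10)}
    print(lawler(jobs, {2: 5, 7: 9}))

    The main design choice is to keep the job map and the precedence map as the only mutable state, so each loop iteration reads like a line of the algorithm's description.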


  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert. $ curl --verbose --capath ./certs/ --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: ./certs/ * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: A Verisign intermediate cert Two self-signed certs The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert. $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: ./certs/verisign-intermediate-ca.crt CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using RC4-SHA * Server certificate: * subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com * start date: 2011-04-17 00:00:00 GMT * expire date: 2012-04-15 23:59:59 GMT * common name: example.com (matched) * issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3 * SSL certificate verify ok. 
> HEAD / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > < HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found < Cache-Control: must-revalidate,no-cache,no-store Cache-Control: must-revalidate,no-cache,no-store < Content-Type: text/html;charset=ISO-8859-1 Content-Type: text/html;charset=ISO-8859-1 < Content-Length: 1267 Content-Length: 1267 < Server: Jetty(7.2.2.v20101205) Server: Jetty(7.2.2.v20101205) < * Connection #0 to host example.com left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): In addition, if I try hitting one of the sites using a self signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hash. Finally, I am able to verify the SSL cert with openssl, using its -CApath option. $ openssl s_client -CApath ./certs/ -connect example.com:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify return:1 depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 verify return:1 depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 verify return:1 depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com verify return:1 --- Certificate chain 0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- Server certificate -----BEGIN CERTIFICATE----- <cert removed> -----END CERTIFICATE----- subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- No client certificate CA names sent --- SSL handshake has read 1563 bytes and written 435 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-SHA Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717 Session-ID-ctx: Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2 Key-Arg : None Start Time: 1303258052 Timeout : 300 (sec) Verify return code: 0 (ok) --- QUIT DONE How can I get curl to verify this cert using the --capath option?
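
    As an independent cross-check that the hashed directory itself is usable, the same verification can be driven from Python's ssl module, which (like openssl s_client) accepts a CA path; hostname and directory below are the question's placeholders. If this succeeds where curl fails, one common culprit is a hash-format mismatch between the OpenSSL version that ran c_rehash and the older OpenSSL 0.9.8k that this curl (7.19.7) was built against, since the symlink hash scheme changed between OpenSSL releases:

    # Verify a server certificate against a hashed CA directory (capath).
    import socket, ssl

    ctx = ssl.create_default_context(capath="./certs/")
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])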


  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database schema, but still want to use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table schema. However, a couple of meta fields (ca. 10-20) will arise later. I want some flexibility there and don't want to adapt the database manually each time, or - worse - change the application logic just because of new fields. So now there are two options which seem workable yet not overkill, but I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. It actually seems simple enough in test code:
    function save($data, $table="forum") {
        // columns
        if ($new_fields = array_diff(array_keys($data), known_fields($table))) {
            extend_schema($table, $new_fields, $data);
        }
        // save
        $columns = implode("`, `", array_keys($data));
        $qm = str_repeat(",?", count(array_keys($data)) - 1);
        echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);");
    }
    function known_fields($table) {
        return unserialize(@file_get_contents("db:$table")) ?: array("id");
    }
    function extend_schema($table, $new_fields, $data) {
        foreach ($new_fields as $field) {
            echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;");
        }
    }
    Since these are mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway, so the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into a BLOB. Any new/extraneous meta fields could be opaque to the database. Sorting out the virtual fields from the real database columns is simple, and the meta fields can just be serialize()d into a blob/text field:
    function ext_save($data, $table="forum") {
        $db_fields = array("id", "content", "flags", "ext");
        // disjoin
        foreach (array_diff(array_keys($data), $db_fields) as $key) {
            $data["ext"][$key] = $data[$key];
            unset($data[$key]);
        }
        $data["ext"] = serialize($data["ext"]);
    }
    Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's more compact and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. What are such columns usually called? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table schema seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/InnoDB stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
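
    The "disjoin" step of option (2) is the crux: split an incoming record into real-column values plus one serialized blob. A minimal sketch of that split, in Python with JSON standing in for PHP's serialize() (field names copied from the question):

    import json

    DB_FIELDS = {"id", "content", "flags"}   # real columns; the rest goes in `ext`

    def split_record(data):
        """Split a record into real-column values and a serialized `ext` blob."""
        fixed = {k: v for k, v in data.items() if k in DB_FIELDS}
        extra = {k: v for k, v in data.items() if k not in DB_FIELDS}
        fixed["ext"] = json.dumps(extra)
        return fixed

    row = split_record({"id": 1, "content": "hi", "user_agent": "x", "votes": 3})

    JSON has the advantage over serialize() that tools other than PHP can still read the blob, which softens the "one-way ticket" concern.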


  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an undirected graph, defined as a map of edges, using the following algorithm:
    Start: with a given vertex A
    Find an edge (X.A, X.B) or (X.B, X.A), add this edge to the path
    Find all edges Ys for which either (Y.C, Y.B) or (Y.B, Y.C) is true
    For each Ys: A=B, goto Start
    Provided edges are defined as the following map, where keys are tuples consisting of two vertices:
    val edges = Map(
      ("n1", "n2") -> "n1n2",
      ("n1", "n3") -> "n1n3",
      ("n3", "n4") -> "n3n4",
      ("n5", "n1") -> "n5n1",
      ("n5", "n4") -> "n5n4")
    As output I need to get a list of ALL paths, where each path is a list of adjacent edges, like this:
    val allPaths = List(
      List(("n1", "n2") -> "n1n2"),
      List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"),
      List(("n5", "n1") -> "n5n1"),
      List(("n5", "n4") -> "n5n4"),
      List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4"))
      //...
      //... more paths to go
    Note: Edge XY = (x,y) - "xy" and YX = (y,x) - "yx" exist as one instance only, either as XY or YX. So far I have managed to implement code that duplicates edges in the path, which is wrong, and I cannot find the error:
    object Graph2 {
      type Vertice = String
      type Edge = ((String, String), String)
      type Path = List[((String, String), String)]

      val edges = Map(
        //(("v1", "v2") , "v1v2"),
        (("v1", "v3") , "v1v3"),
        (("v3", "v4") , "v3v4")
        //(("v5", "v1") , "v5v1"),
        //(("v5", "v4") , "v5v4")
      )

      def main(args: Array[String]): Unit = {
        val processedVerticies: Map[Vertice, Vertice] = Map()
        val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map()
        val path: Path = List()
        println(buildPath(path, "v1", processedVerticies, processedEdges))
      }

      /**
       * Builds a path of vertices connected by edges, starting from a given vertex
       * Input: map of edges
       * Output: list of connected edges like:
       * List(("n1", "n2") -> "n1n2"),
       * List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"),
       * List(("n5", "n1") -> "n5n1"),
       * List(("n5", "n4") -> "n5n4"),
       * List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4"))
       */
      def buildPath(path: Path, vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = {
        println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges)
        if (!processedVerticies.contains(vertice)) {
          val edges = children(vertice)
          println("Edges: " + edges)
          val x = edges.map(edge => {
            if (!processedEdges.contains(edge._1)) {
              addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)), processedEdges, path, edge)
            } else {
              println("Already have edge: " + edge + " Return path:" + path)
              path
            }
          })
          val y = x.toList
          y
        } else {
          List(path)
        }
      }

      def addToPath(vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)], path: Path, edge: Edge): Path = {
        val newPath: Path = path ::: List(edge)
        val key = edge._1
        val nextVertice = neighbor(vertice, key)
        val x = buildPath(newPath, nextVertice, processedVerticies,
          processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice)))).flatten // need to define buildPath type
        x
      }

      def children(vertice: Vertice) = {
        edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice)
      }

      def containsPair(x: (Vertice, Vertice), m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = {
        m.contains((x._1, x._2)) || m.contains((x._2, x._1))
      }

      def neighbor(vertice: String, key: (String, String)): String = key match {
        case (`vertice`, x) => x
        case (x, `vertice`) => x
      }
    }
Running this results in: List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4))) Why is that?
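
    The doubled ((v1,v3),v1v3) appears to come from the .flatten in addToPath: buildPath returns a List[Path] of alternative continuations, and flattening it concatenates those alternatives into one edge list instead of keeping them as separate paths. Keeping "path" and "list of paths" as distinct types avoids this. A compact sketch of the enumeration in Python (same n1..n5 data; each path is a list of (key, name) pairs, with traversed edges tracked by their stored map key so orientation doesn't matter):

    edges = {("n1", "n2"): "n1n2", ("n1", "n3"): "n1n3", ("n3", "n4"): "n3n4",
             ("n5", "n1"): "n5n1", ("n5", "n4"): "n5n4"}

    def extend(vertex, used, path, out):
        if path:
            out.append(path)              # every non-empty prefix is a path
        for key, name in edges.items():
            if vertex in key and key not in used:
                nxt = key[1] if key[0] == vertex else key[0]
                extend(nxt, used | {key}, path + [(key, name)], out)

    all_paths = []
    for start in {v for key in edges for v in key}:
        # the same edge sequence walked from the opposite end produces the
        # same (key, name) list, so reverse traversals show up as duplicates
        extend(start, set(), [], all_paths)
    print(len(all_paths), all_paths[:3])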


  • I need some help with my VB.NET code

    - by akmalizhar
    Currently I need to develop an application that can extract information from a few websites. This is what I have done up until now:
    Imports System
    Imports System.Text.RegularExpressions
    Imports System.IO
    Imports System.Net
    Imports System.Web
    Imports System.Data.SqlClient
    Imports System.Threading
    Imports System.Data.DataSet
    Imports System.Data.OleDb

    Module module1
        Dim url As String
        Dim hotelName As String = ""

        Sub Main()
            Dim url As String = ""
            Console.Write("enter url: ")
            url = Console.ReadLine()
            extractor(url)
        End Sub

        Public Sub extractor(ByVal url As String)
            Dim strConn As String = "Data Source = localhost; Initial Catalog = knowledgeBase; Integrated Security = True; Connection Timeout = 0;"
            Dim conn As SqlConnection = New SqlConnection(strConn)
            conn.Open()
            Dim strSQL1 As String
            Dim matchStn1 As String = ""
            Dim matchstn2 As String = ""
            Dim matchstn3 As String = ""
            Dim matchstn4 As String = ""
            Dim matchstn5 As String = ""
            Dim matchstn6 As String = ""
            Dim matchstn7 As String = ""
            Dim matchstn8 As String = ""
            Dim matchstn9 As String = ""
            Dim matchstn10 As String = ""
            Dim objRequest As WebRequest = HttpWebRequest.Create(url)
            Dim objResponse As WebResponse = objRequest.GetResponse()
            Dim objStreamReader As New StreamReader(objResponse.GetResponseStream())
            Dim strpage As String = objStreamReader.ReadToEnd
            Dim RegExStr As String = "<[^>]*>"
            Dim R As New Regex(RegExStr)
            Dim sourcestring As String = strpage

            Dim re As Regex = New Regex("<h2 class=""name hotel""[^>]*>[\s\S]+?</h2>")
            Dim mc As MatchCollection = re.Matches(sourcestring)
            Dim mIdx As Integer = 0
            For Each m As Match In mc
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchStn1 = m.Groups(groupIdx).Value
                    matchStn1 = R.Replace(matchStn1, " ")
                    matchStn1 = matchStn1.Trim()
                Next
                mIdx = mIdx + 1
            Next

            Dim re9 As Regex = New Regex("<li class=""cuisine""[^>]*>[^>]+</li>")
            Dim mc9 As MatchCollection = re9.Matches(sourcestring)
            Dim mIdx9 As Integer = 0
            For Each m As Match In mc9
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn9 = m.Groups(groupIdx).Value
                    matchstn9 = R.Replace(matchstn9, " ")
                    matchstn9 = matchstn9.Trim()
                Next
                mIdx = mIdx + 1
            Next

            Dim re2 As Regex = New Regex("<span class=""street-address""[^>]*>[^>]+</span>")
            Dim mc2 As MatchCollection = re2.Matches(sourcestring)
            Dim mIdx2 As Integer = 0
            For Each m As Match In mc2
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn2 = m.Groups(groupIdx).Value
                    matchstn2 = R.Replace(matchstn2, " ")
                    matchstn2 = matchstn2.Trim()
                Next
                mIdx2 = mIdx2 + 1
            Next

            Dim re3 As Regex = New Regex("<span class=""locality""[^>]*>[\s\S]+?</span>")
            Dim mc3 As MatchCollection = re3.Matches(sourcestring)
            Dim mIdx3 As Integer = 0
            For Each m As Match In mc3
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn3 = m.Groups(groupIdx).Value
                    matchstn3 = R.Replace(matchstn3, " ")
                    matchstn3 = matchstn3.Trim()
                Next
                mIdx3 = mIdx3 + 1
            Next

            Dim re4 As Regex = New Regex("<span property=""v:postal-code""[^>]*>[\s\S]+?</span>")
            Dim mc4 As MatchCollection = re4.Matches(sourcestring)
            Dim mIdx4 As Integer = 0
            For Each m As Match In mc4
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn4 = m.Groups(groupIdx).Value
                    matchstn4 = R.Replace(matchstn4, " ")
                    matchstn4 = matchstn4.Trim()
                Next
                mIdx4 = mIdx4 + 1
            Next

            Dim re5 As Regex = New Regex("<span class=""country-name""[^>]*>[\s\S]+?</span>")
            Dim mc5 As MatchCollection = re5.Matches(sourcestring)
            Dim mIdx5 As Integer = 0
            For Each m As Match In mc5
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn5 = m.Groups(groupIdx).Value
                    matchstn5 = R.Replace(matchstn5, " ")
                    matchstn5 = matchstn5.Trim()
                Next
                mIdx5 = mIdx5 + 1
            Next

            Dim re10 As Regex = New Regex("<address class=""adr""[^>]*>[\s\S]+?</address>")
            Dim mc10 As MatchCollection = re10.Matches(sourcestring)
            Dim mIdx10 As Integer = 0
            For Each m As Match In mc10
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn10 = m.Groups(groupIdx).Value
                    matchstn10 = R.Replace(matchstn10, " ")
                    matchstn10 = matchstn10.Trim()
                    strSQL1 = "insert into infoRestaurant (nameRestaurant, cuisine, streetAddress, locality, postalCode, countryName, addressFull, tel, attractionType) values (N" & _
                        FormatSqlParam(matchStn1) & ",N" & _
                        FormatSqlParam(matchstn9) & ",N" & _
                        FormatSqlParam(matchstn2) & ",N" & _
                        FormatSqlParam(matchstn3) & ",N" & _
                        FormatSqlParam(matchstn4) & ",N" & _
                        FormatSqlParam(matchstn5) & ",N" & _
                        FormatSqlParam(matchstn10) & ",N" & _
                        FormatSqlParam(matchstn6) & ",N" & _
                        FormatSqlParam(matchstn7) & ")"
                    Dim objCommand1 As New SqlCommand(strSQL1, conn)
                    objCommand1.ExecuteNonQuery()
                Next
                mIdx4 = mIdx4 + 1
            Next

            Dim re6 As Regex = New Regex("<span class=""tel""[^>]*>[\s\S]+?</span>")
            Dim mc6 As MatchCollection = re6.Matches(sourcestring)
            Dim mIdx6 As Integer = 0
            For Each m As Match In mc6
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn6 = m.Groups(groupIdx).Value
                    matchstn6 = R.Replace(matchstn6, " ")
                    matchstn6 = matchstn6.Trim()
                Next
                mIdx6 = mIdx6 + 1
            Next

            Dim re7 As Regex = New Regex("<div><b>Attraction type:[^>]*>[\s\S]+?</div>")
            Dim mc7 As MatchCollection = re7.Matches(sourcestring)
            Dim mIdx7 As Integer = 0
            For Each m As Match In mc7
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn7 = m.Groups(groupIdx).Value
                    matchstn7 = R.Replace(matchstn7, " ")
                    matchstn7 = matchstn7.Trim()
                Next
                mIdx7 = mIdx7 + 1
            Next

            Dim re8 As Regex = New Regex("(?=<p id).*(?<=</p>)")
            Dim mc8 As MatchCollection = re8.Matches(sourcestring)
            Dim mIdx8 As Integer = 0
            For Each m As Match In mc8
                For groupIdx As Integer = 0 To m.Groups.Count - 1
                    matchstn8 = m.Groups(groupIdx).Value
                    matchstn8 = R.Replace(matchstn8, " ")
                    matchstn8 = matchstn8.Trim()
                    Dim strSQL2 As String = "insert into feedBackRestaurant (feedBackView) values(N" + FormatSqlParam(matchstn8) + ")"
                    Dim objCommand2 As New SqlCommand(strSQL2, conn)
                    objCommand2.ExecuteNonQuery()
                Next
                mIdx8 = mIdx8 + 1
            Next

            objStreamReader.Close()
            conn.Close()
        End Sub

        Public Function FormatSqlParam(ByVal strParam As String) As String
            Dim newParamFormat As String
            If strParam = String.Empty Then
                newParamFormat = "'" & "NA" & "'"
            Else
                newParamFormat = strParam.Trim()
                newParamFormat = "'" & newParamFormat.Replace("'", "''") & "'"
            End If
            Return newParamFormat
        End Function
    End Module
    ---problems---
    The problems that I face are:
    1. The database foreign key is not working here. Someone told me that some code needs to be added, but I don't know how.
    2. The data repeats as I run the application. I guess it requires an update-database function, but I have no idea how.
    3. I have to add a multithreading function as well. And last, how can I make my application flexible even when the HTML code changes? Can anyone help me? The website that I need to extract from is http://www.tripadvisor.com/Tourism-g293951-Malaysia-Vacations.html - I need the information about hotels, restaurants and attraction places.
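
    Two of the three problems (string-built SQL and rows duplicated on every re-run) are usually solved together: parameterized commands plus a uniqueness constraint. A sketch of the pattern, in Python/SQLite for brevity (ADO.NET's SqlCommand.Parameters works the same way); the table and column names are illustrative, not the asker's schema:

    import sqlite3

    conn = sqlite3.connect("scrape.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS restaurant (
                        name TEXT, street TEXT, tel TEXT,
                        UNIQUE(name, street))""")

    def save(name, street, tel):
        # Placeholders quote values safely (no FormatSqlParam needed), and
        # INSERT OR IGNORE (SQLite-specific) skips rows already present, so
        # re-running the scraper does not duplicate data.
        conn.execute("INSERT OR IGNORE INTO restaurant VALUES (?, ?, ?)",
                     (name, street, tel))
        conn.commit()

    save("Foo Cafe", "1 Main St", "555-0100")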


  • Problem with the output of the jQuery function .offset in IE

    - by vandalk
    Hello! I'm new to jQuery and JavaScript, and to web site development overall, and I'm having a problem with the .offset function. I have the following code working fine in Chrome and FF but not working in IE:
    $(document).keydown(function(k){
        var keycode=k.which;
        var posk=$('html').offset();
        var centeryk=screen.availHeight*0.4;
        var centerxk=screen.availWidth*0.4;
        $("span").text(k.which+","+posk.top+","+posk.left);
        if (keycode==37){
            k.preventDefault();
            $("html,body").stop().animate({scrollLeft:-1*posk.left-centerxk})
        };
        if (keycode==38){
            k.preventDefault();
            $("html,body").stop().animate({scrollTop:-1*posk.top-centeryk})
        };
        if (keycode==39){
            k.preventDefault();
            $("html,body").stop().animate({scrollLeft:-1*posk.left+centerxk})
        };
        if (keycode==40){
            k.preventDefault();
            $("html,body").stop().animate({scrollTop:-1*posk.top+centeryk})
        };
    });
    What I want it to do is to scroll the window a set percentage using the arrow keys. My thought was to find the current coordinates of the top left corner of the document, add a percentage relative to the user's screen to it, and animate the scroll so that the content doesn't jump and the user doesn't lose focus from where he was. The $("span").text calls are just so I know what's happening and will be turned into comments when the code is complete. So here is what happens: in Chrome and Firefox the output of $("span").text for the position variables is correct, starting at 0,0 and always showing how much of the content was scrolled, in coordinates. But in IE it starts at -2,-2 and never gets out of it; even if I manually scroll the window to the end and then try the right arrow key, it still returns the initial value of -2,-2 and scrolls back to the beginning. I tried substituting document.body.scrollLeft and scrollTop for the offset, but the result is the same, only this time the coordinates are 0,0. Am I doing something wrong? Or is this some IE bug? Is there a way around it, or some other function I can use to achieve the same results? On another note, I made two other navigation options for the user in this section of the site. One is to click and drag anywhere on the screen to move it:
    $("html").mousedown(function(e) {
        var initx=e.pageX
        var inity=e.pageY
        $(document).mousemove(function(n) {
            var x_inc= initx-n.pageX;
            var y_inc= inity-n.pageY;
            window.scrollBy(x_inc*0.7,y_inc*0.7);
            initx=n.pageX;
            inity=n.pageY
            //$("span").text(initx+ "," +inity+ "," +x_inc+ "," +y_inc+ "," +e.pageX+ "," +e.pageY+ "," +n.pageX+ "," +n.pageY);
            // cancel out any text selections
            document.body.focus();
            // prevent text selection in IE
            document.onselectstart = function () { return false; };
            // prevent IE from trying to drag an image
            document.ondragstart = function() { return false; };
            // prevent text selection (except IE)
            return false;
        });
    });
    $("html").mouseup(function() {
        $(document).unbind('mousemove');
    });
    The only part of this code I didn't write is the text-selection-prevention lines, which I found in a tutorial about clicking and dragging objects. Anyway, this code works fine in Chrome, FireFox and IE, though in Firefox and IE movement glitches happen more often while you drag; sometimes the "scrolling" seems a little jagged. It's only a visual thing and not that significant, but if there's a way to prevent it I would like to know.


  • Using VBA / Macro to highlight changes in excel

    - by Zaj
    I have a spreadsheet that I send out to various locations to have information on it updated and then sent back to me. I had to add validation and lock the cells to force users to input accurate information, then use VBA to disable the cut/copy/paste workaround, and additionally insert a VBA routine to force users to open the Excel file with macros enabled. Now I'm trying to track the changes so that I know what was updated when I receive the sheet back. However, every time I do this I get an error when someone saves the document, and randomly it will lock me out of the document completely. My code is pasted below; can someone help me create VBA code to highlight changes, instead of going through Excel's share/track changes option? ThisWorkbook (Code):
    Option Explicit
    Const WelcomePage = "Macros"

    Private Sub Workbook_BeforeClose(Cancel As Boolean)
        Call ToggleCutCopyAndPaste(True)
        'Turn off events to prevent unwanted loops
        Application.EnableEvents = False
        'Evaluate if workbook is saved and emulate default prompts
        With ThisWorkbook
            If Not .Saved Then
                Select Case MsgBox("Do you want to save the changes you made to '" & .Name & "'?", _
                        vbYesNoCancel + vbExclamation)
                    Case Is = vbYes
                        'Call customized save routine
                        Call CustomSave
                    Case Is = vbNo
                        'Do not save
                    Case Is = vbCancel
                        'Set up procedure to cancel close
                        Cancel = True
                End Select
            End If
            'If Cancel was clicked, turn events back on and cancel close,
            'otherwise close the workbook without saving further changes
            If Not Cancel = True Then
                .Saved = True
                Application.EnableEvents = True
                .Close savechanges:=False
            Else
                Application.EnableEvents = True
            End If
        End With
    End Sub

    Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
        'Turn off events to prevent unwanted loops
        Application.EnableEvents = False
        'Call customized save routine and set workbook's saved property to true
        '(To cancel regular saving)
        Call CustomSave(SaveAsUI)
        Cancel = True
        'Turn events back on and set saved property to true
        Application.EnableEvents = True
        ThisWorkbook.Saved = True
    End Sub

    Private Sub Workbook_Open()
        Call ToggleCutCopyAndPaste(False)
        'Unhide all worksheets
        Application.ScreenUpdating = False
        Call ShowAllSheets
        Application.ScreenUpdating = True
    End Sub

    Private Sub CustomSave(Optional SaveAs As Boolean)
        Dim ws As Worksheet, aWs As Worksheet, newFname As String
        'Turn off screen flashing
        Application.ScreenUpdating = False
        'Record active worksheet
        Set aWs = ActiveSheet
        'Hide all sheets
        Call HideAllSheets
        'Save workbook directly or prompt for saveas filename
        If SaveAs = True Then
            newFname = Application.GetSaveAsFilename( _
                fileFilter:="Excel Files (*.xls), *.xls")
            If Not newFname = "False" Then ThisWorkbook.SaveAs newFname
        Else
            ThisWorkbook.Save
        End If
        'Restore file to where user was
        Call ShowAllSheets
        aWs.Activate
        'Restore screen updates
        Application.ScreenUpdating = True
    End Sub

    Private Sub HideAllSheets()
        'Hide all worksheets except the macro welcome page
        Dim ws As Worksheet
        Worksheets(WelcomePage).Visible = xlSheetVisible
        For Each ws In ThisWorkbook.Worksheets
            If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVeryHidden
        Next ws
        Worksheets(WelcomePage).Activate
    End Sub

    Private Sub ShowAllSheets()
        'Show all worksheets except the macro welcome page
        Dim ws As Worksheet
        For Each ws In ThisWorkbook.Worksheets
            If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVisible
        Next ws
        Worksheets(WelcomePage).Visible = xlSheetVeryHidden
    End Sub

    Private Sub Workbook_Activate()
        Call ToggleCutCopyAndPaste(False)
    End Sub

    Private Sub Workbook_Deactivate()
        Call ToggleCutCopyAndPaste(True)
    End Sub
    This is in my module code:
    Option Explicit

    Sub ToggleCutCopyAndPaste(Allow As Boolean)
        'Activate/deactivate cut, copy, paste and pastespecial menu items
        Call EnableMenuItem(21, Allow) ' cut
        Call EnableMenuItem(19, Allow) ' copy
        Call EnableMenuItem(22, Allow) ' paste
        Call EnableMenuItem(755, Allow) ' pastespecial
        'Activate/deactivate drag and drop ability
        Application.CellDragAndDrop = Allow
        'Activate/deactivate cut, copy, paste and pastespecial shortcut keys
        With Application
            Select Case Allow
                Case Is = False
                    .OnKey "^c", "CutCopyPasteDisabled"
                    .OnKey "^v", "CutCopyPasteDisabled"
                    .OnKey "^x", "CutCopyPasteDisabled"
                    .OnKey "+{DEL}", "CutCopyPasteDisabled"
                    .OnKey "^{INSERT}", "CutCopyPasteDisabled"
                Case Is = True
                    .OnKey "^c"
                    .OnKey "^v"
                    .OnKey "^x"
                    .OnKey "+{DEL}"
                    .OnKey "^{INSERT}"
            End Select
        End With
    End Sub

    Sub EnableMenuItem(ctlId As Integer, Enabled As Boolean)
        'Activate/Deactivate specific menu item
        Dim cBar As CommandBar
        Dim cBarCtrl As CommandBarControl
        For Each cBar In Application.CommandBars
            If cBar.Name <> "Clipboard" Then
                Set cBarCtrl = cBar.FindControl(ID:=ctlId, recursive:=True)
                If Not cBarCtrl Is Nothing Then cBarCtrl.Enabled = Enabled
            End If
        Next
    End Sub

    Sub CutCopyPasteDisabled()
        'Inform user that the functions have been disabled
        MsgBox " Cutting, copying and pasting have been disabled in this workbook. Please hard key in data. "
    End Sub


  • many-to-many-to-many, incl alignment of data from diff sources

    - by JefeCoon
    Re-factoring dbase to support many:many:many. At the second and third levels we need to preserve end-user 'mapping' or aligning of data from different sources, e.g. Order 17 FirstpartyOrderID => aha LineItem_for_BigShinyThingy => AA-1 # maps to 77-a LineItem_for_BigShinyThingy => AA-2 # maps to 77-b, 77-c LineItem_for_LittleWidget => AA-x # maps to 77-zulu, 77-alpha, 99-foxtrot LineItem_for_LittleWidget => AA-y # maps to 77-zulu, 99-foxtrot LineItem_for_LittleWidget => AA-z # maps to 77-alpha ThirdpartyOrderID => foo LineItem_for_BigShinyThingy => 77-a LineItem_for_BigShinyThingy => 77-b LineItem_for_BigShinyThingy => 77-c LineItem_for_LittleWidget => 77-zulu LineItem_for_LittleWidget => 77-alpha ThirdpartyOrderID => bar LineItem_for_LittleWidget => 99-foxtrot Each LineItem has daily datapoints reported from its own source (Firstparty|Thirdparty). In our UI & app we provide tools to align these, then we'd like to save them into the cleanest possible schema for querying, enabling us to diff the reported daily datapoints, and perform other daily calculations (which we'll store in the dbase also, fortunately that should be cake once we've nailed this). We need to map related [firstparty|thirdparty]line_items which have their own respective datapoints. We'll be using the association to pull each line_items collection of datapoints for summary and discrepancy calculations. I'm considering two options, std has_many,through x2 --or-- possibly (scary) ubermasterjoin table OptionA: order<<-->> order_join_table[id,order_id,firstparty_order_id,thirdparty_order_id] <<-->>line_item order_join_table[firstparty_order_id]-->raw_order[id] order_join_table[thirdparty_order_id]-->raw_order[id] raw_order-->raw_line_items[raw_order_id] line_item<<-->> line_item_join[id,LI_join_id,firstparty_LI,thirdparty_LI <<-->>raw_line_items line_item_join[firstparty_LI]-->raw_line_item[id] line_item_join[thirdparty_LI]-->raw_line_item[id] raw_line_item<<-->>datapoints = we rely upon join to store all mappings of first|third orders & line_items = keys to raw_* enable lookup of these order & line_item details = concerns about circular references and/or lack of correct mapping logic, e.g order--line_item--raw_line_items vs. order--raw_order--raw_line_items OptionB: order<<-->> join_master[id,order_id,FP_order_id,TP_order_id,FP_line_item_id,TP_line_item_id] join_master[FP_order_id & TP_order_id]-->raw_order[id] join_master[FP_line_item_id & TP_line_item_id]-->raw_line_item[id] = every combo of FP_line_item + TP_line_item writes a record into the join_master table = "theoretically" queries easy/fast/flexible/sexy At long last, my questions: a) any learnings from painful firsthand experience about how best to implement/tune/optimize many-to-many-to-many relationships b) in rails? c) any painful gotchas (circular references, slow queries, spaghetti-monsters) to watch out for? d) any joy & goodness in Rails3 that makes this magically easy & joyful? e) anyone written the "how to do many-to-many-to-many schema in Rails and make it fast & sexy?" tutorial that I somehow haven't found? If not, I'll follow up with our learnings in the hope it's helpful.. Thanks in advance- --Jeff
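
    Since the question is ultimately about schema shape, it can help to see OptionB as concrete DDL. A sketch in SQLite syntax driven from Python (illustrative names and trimmed columns, not a Rails migration), where the UNIQUE pair is the "every combo of FP_line_item + TP_line_item writes a record" rule:

    import sqlite3

    ddl = """
    CREATE TABLE orders         (id INTEGER PRIMARY KEY);
    CREATE TABLE raw_orders     (id INTEGER PRIMARY KEY, source TEXT);
    CREATE TABLE raw_line_items (id INTEGER PRIMARY KEY,
                                 raw_order_id INTEGER REFERENCES raw_orders(id));
    CREATE TABLE join_master    (id INTEGER PRIMARY KEY,
                                 order_id     INTEGER REFERENCES orders(id),
                                 fp_line_item INTEGER REFERENCES raw_line_items(id),
                                 tp_line_item INTEGER REFERENCES raw_line_items(id),
                                 UNIQUE (fp_line_item, tp_line_item));
    """
    sqlite3.connect(":memory:").executescript(ddl)

    In Rails terms this is just a has_many :through join model; the unique index makes explicit that the (fp_line_item, tp_line_item) pair is the unit of alignment.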


  • Create dropdown from multi array + PHP class

    - by chris
    Hi there! I am writing a class that creates a DOB selection dropdown, and I am having trouble figuring out dropdown(). It seems to be working, but not exactly: the code just creates one dropdown, and under this dropdown all the day, month and year data are in one selection, like:
    <label> <sup>*</sup>DOB</label>
    <select name="form_bod_year">
    <option value=""/>
    <option selected="" value="0">1</option>
    <option value="1">2</option>
    <option value="2">3</option>
    <option value="3">4</option>
    <option value="4">5</option>
    <option value="5">6</option>
    ..
    <option value="29">30</option>
    <option value="30">31</option>
    <option value="1">January</option>
    <option value="2">February</option>
    ..
    <option value="11">November</option>
    <option value="12">December</option>
    <option selected="" value="0">1910</option>
    <option value="1">1911</option>
    ..
    <option value="98">2008</option>
    <option value="99">2009</option>
    <option value="100">2010</option>
    </select>
    Here is my code. I wonder why all the data ends up in one selection; it has to be three selections - Day, Month and Year.
    //dropdown connector
    class DropDownConnector {
        var $dropDownsDatas;
        var $field_label;
        var $field_name;
        var $locale;

        function __construct($dropDownsDatas, $field_label, $field_name, $locale) {
            $this->dropDownsDatas = $dropDownsDatas;
            $this->field_label = $field_label;
            $this->field_name = $field_name;
            $this->locale = $locale;
        }

        function getValue(){
            return $_POST[$this->field_name];
        }

        function dropdown(){
            $selectedVal = $this->getValue($this->field_name);
            foreach($this->dropDownsDatas as $keys=>$values){
                foreach ($values as $key=>$value){
                    $selected = ($key == $selectedVal ? "selected" : "" );
                    $options .= sprintf('<option value="%s">%s</option>',$key,$value);
                };
            };
            return $select_start = "<select name=\"$this->field_desc\">".$options."</select>";
        }

        function getLabel(){
            $non_req = $this->getNotRequiredData();
            $req = in_array($this->field_name, $non_req) ? '' : '<sup>*</sup>';
            return $this->field_label ? $req . $this->field_label : '';
        }

        function __toString() {
            $id = $this->field_name;
            $label = $this->getLabel();
            $field = $this->dropdown();
            return '<label for="'.$this->field_name.'">'.$label.'</label>'.$field;
        }
    }

    function generateForm ($lang,$country_list,$states_list,$days_of_month,$month_list,$years){
        $xx = array(
            'form_bod_day' => $days_of_month,
            'form_bod_month' => $month_list,
            'form_bod_year' => $years);
        echo $dropDownConnector = new DropDownConnector($xx,'DOB','bod','en-US');
    }

    // Call php class to use class on external functions.
    $avInq = new formGenerator;
    $lang='en-US';
    echo generateForm ($lang,$country_list,$states_list,$days_of_month,$month_list,$years);


  • Control XML serialization of Dictionary<K, T>

    - by Luca
    I'm investigating about XML serialization, and since I use lot of dictionary, I would like to serialize them as well. I found the following solution for that (I'm quite proud of it! :) ). [XmlInclude(typeof(Foo))] public class XmlDictionary<TKey, TValue> { /// <summary> /// Key/value pair. /// </summary> public struct DictionaryItem { /// <summary> /// Dictionary item key. /// </summary> public TKey Key; /// <summary> /// Dictionary item value. /// </summary> public TValue Value; } /// <summary> /// Dictionary items. /// </summary> public DictionaryItem[] Items { get { List<DictionaryItem> items = new List<DictionaryItem>(ItemsDictionary.Count); foreach (KeyValuePair<TKey, TValue> pair in ItemsDictionary) { DictionaryItem item; item.Key = pair.Key; item.Value = pair.Value; items.Add(item); } return (items.ToArray()); } set { ItemsDictionary = new Dictionary<TKey,TValue>(); foreach (DictionaryItem item in value) ItemsDictionary.Add(item.Key, item.Value); } } /// <summary> /// Indexer base on dictionary key. /// </summary> /// <param name="key"></param> /// <returns></returns> public TValue this[TKey key] { get { return (ItemsDictionary[key]); } set { Debug.Assert(value != null); ItemsDictionary[key] = value; } } /// <summary> /// Delegate for get key from a dictionary value. /// </summary> /// <param name="value"></param> /// <returns></returns> public delegate TKey GetItemKeyDelegate(TValue value); /// <summary> /// Add a range of values automatically determining the associated keys. /// </summary> /// <param name="values"></param> /// <param name="keygen"></param> public void AddRange(IEnumerable<TValue> values, GetItemKeyDelegate keygen) { foreach (TValue v in values) ItemsDictionary.Add(keygen(v), v); } /// <summary> /// Items dictionary. /// </summary> [XmlIgnore] public Dictionary<TKey, TValue> ItemsDictionary = new Dictionary<TKey,TValue>(); } The classes deriving from this class are serialized in the following way: <FooDictionary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Items> <DictionaryItemOfInt32Foo> <Key/> <Value/> </DictionaryItemOfInt32XmlProcess> <Items> This give me a good solution, but: How can I control the name of the element DictionaryItemOfInt32Foo What happens if I define a Dictionary<FooInt32, Int32> and I have the classes Foo and FooInt32? Is it possible to optimize the class above? THank you very much!
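
    For contrast, the Items property above is doing by hand what any key/value-pair projection does. Here is the same shape produced in Python with ElementTree, where the element names (Entry, Key, Value) are plain strings - the kind of control over the element name being asked for with DictionaryItemOfInt32Foo:

    import xml.etree.ElementTree as ET

    def dict_to_xml(d, root_name="Items", item_name="Entry"):
        # Each dictionary entry becomes one element whose tag we choose freely.
        root = ET.Element(root_name)
        for k, v in d.items():
            item = ET.SubElement(root, item_name)
            ET.SubElement(item, "Key").text = str(k)
            ET.SubElement(item, "Value").text = str(v)
        return ET.tostring(root, encoding="unicode")

    print(dict_to_xml({1: "Foo", 2: "Bar"}))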


  • Javascript style objects in Objective-C

    - by awolf
    Background: I use a ton of NSDictionary objects in my iPhone and iPad code. I'm sick of the verbose way of getting/setting keys to these state dictionaries. So a little bit of an experiment: I just created a class I call Remap. Remap will take any arbitrary set[VariableName]:(NSObject *) obj selector and forward that message to a function that will insert obj into an internal NSMutableDictionary under the key [vairableName]. Remap will also take any (zero argument) arbitrary [variableName] selector and return the NSObject mapped in the NSMutableDictionary under the key [variableName]. e.g. Remap * remap = [[Remap alloc] init]; NSNumber * testNumber = [NSNumber numberWithInt:46]; [remap setTestNumber:testNumber]; testNumber = [remap testNumber]; [remap setTestString:@"test string"]; NSString * testString = [remap testString]; NSMutableDictionary * testDict = [NSMutableDictionary dictionaryWithObject:testNumber forKey:@"testNumber"]; [remap setTestDict:testDict]; testDict = [remap testDict]; where none of the properties testNumber, testString, or testDict are actually defined in Remap. The crazy thing? It works... My only question is how can I disable the "may not respond to " warnings for JUST accesses to Remap? P.S. : I'll probably end up scrapping this and going with macros since message forwarding is quite inefficient... but aside from that does anyone see other problems with Remap? Here's Remap's .m for those who are curious: #import "Remap.h" @interface Remap () @property (nonatomic, retain) NSMutableDictionary * _data; @end @implementation Remap @synthesize _data; - (void) dealloc { relnil(_data); [super dealloc]; } - (id) init { self = [super init]; if (self != nil) { NSMutableDictionary * dict = [[NSMutableDictionary alloc] init]; [self set_data:dict]; relnil(dict); } return self; } - (void)forwardInvocation:(NSInvocation *)anInvocation { NSString * selectorName = [NSString stringWithUTF8String: sel_getName([anInvocation selector])]; NSRange range = [selectorName rangeOfString:@"set"]; NSInteger numArguments = [[anInvocation methodSignature] numberOfArguments]; if (range.location == 0 && numArguments == 4) { //setter [anInvocation setSelector:@selector(setData:withKey:)]; [anInvocation setArgument:&selectorName atIndex:3]; [anInvocation invokeWithTarget:self]; } else if (numArguments == 3) { [anInvocation setSelector:@selector(getDataWithKey:)]; [anInvocation setArgument:&selectorName atIndex:2]; [anInvocation invokeWithTarget:self]; } } - (NSMethodSignature *) methodSignatureForSelector:(SEL) aSelector { NSString * selectorName = [NSString stringWithUTF8String: sel_getName(aSelector)]; NSMethodSignature * sig = [super methodSignatureForSelector:aSelector]; if (sig == nil) { NSRange range = [selectorName rangeOfString:@"set"]; if (range.location == 0) { sig = [self methodSignatureForSelector:@selector(setData:withKey:)]; } else { sig = [self methodSignatureForSelector:@selector(getDataWithKey:)]; } } return sig; } - (NSObject *) getDataWithKey: (NSString *) key { NSObject * returnValue = [[self _data] objectForKey:key]; return returnValue; } - (void) setData: (NSObject *) data withKey:(NSString *)key { if (key && [key length] >= 5 && data) { NSRange range; range.length = 1; range.location = 3; NSString * firstChar = [key substringWithRange:range]; firstChar = [firstChar lowercaseString]; range.length = [key length] - 5; // the 4 we have processed plus the training : range.location = 4; NSString * adjustedKey = [NSString stringWithFormat:@"%@%@", firstChar, [key 
substringWithRange:range]]; [[self _data] setObject:data forKey:adjustedKey]; } else { //assert? } } @end
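
    The same trick reads naturally in a language where dynamic dispatch is built in. A rough Python analogue of Remap (using a set_ prefix rather than Objective-C's setX: selectors), for comparison with the forwardInvocation approach:

    class Remap:
        def __init__(self):
            self._data = {}

        def __getattr__(self, name):
            # Only called for names not found by normal lookup, so this plays
            # the role of forwardInvocation/methodSignatureForSelector.
            if name.startswith("set_"):
                return lambda value: self._data.__setitem__(name[4:], value)
            try:
                return self._data[name]
            except KeyError:
                raise AttributeError(name)

    remap = Remap()
    remap.set_test_number(46)
    print(remap.test_number)   # -> 46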


  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings / graphics / product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? I.e., all the products essentially consist of the same core functionality, customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", "Excel for Gardeners", "Excel for CEOs", etc. Each product has the same functionality, but differs in name, graphics, help files, included templates etc. The environment in which these are being built is: vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain. E.g., if I introduce a new string / new resource, projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible.) Hopefully I've missed the blindingly obvious and easy way of doing all this. What is it?
    ============ Clarification(s) ================
    By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects (including a Setup project), which builds a set of assemblies and creates a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this?
    ------------ The 95% Solution -----------------
    Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows:
    Create a class library project ("Satellite").
    Delete the default .cs file and add a folder ("Default").
    Create a resource file in the folder, "MyResources". In Properties, set CustomToolNamespace to something appropriate (e.g. "XXX"). Make sure the access modifier for the resources is "Public". Add the resources.
    Edit the source code. Refer to the resources in your code as XXX.MyResources.ResourceName.
    Create Configurations for each product variant ("ConfigN").
    For each product variant, create a folder ("VariantN").
    Copy and paste the MyResources file into each VariantN folder.
    Unload the "Satellite" project, and edit the .csproj file.
    For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute.
    Save, reload the .csproj, and you're done...
    This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration where a resource is missing. The resource files can be localized using the standard method (create a second resources file (MyResources.fr.resx) and edit the .csproj as before).
    The reason this is a 95% solution is that resources used to initialize forms (e.g. form titles, button texts) can't be easily handled in the same manner - the easiest approach seems to be to overwrite these with values from the satellite assembly.


  • Database with 5 Tables with Insert and Select

    - by kirbby
    hi guys, my problem is that i have 5 tables and need inserts and selects. what i did is for every table a class and there i wrote the SQL Statements like this public class Contact private static String IDCont = "id_contact"; private static String NameCont = "name_contact"; private static String StreetCont = "street_contact"; private static String Street2Cont = "street2_contact"; private static String Street3Cont = "street3_contact"; private static String ZipCont = "zip_contact"; private static String CityCont = "city_contact"; private static String CountryCont = "country_contact"; private static String Iso2Cont = "iso2_contact"; private static String PhoneCont = "phone_contact"; private static String Phone2Cont = "phone2_contact"; private static String FaxCont = "fax_contact"; private static String MailCont = "mail_contact"; private static String Mail2Cont = "mail2_contact"; private static String InternetCont = "internet_contact"; private static String DrivemapCont = "drivemap_contact"; private static String PictureCont = "picture_contact"; private static String LatitudeCont = "latitude_contact"; private static String LongitudeCont = "longitude_contact"; public static final String TABLE_NAME = "contact"; public static final String SQL_CREATE = "CREATE TABLE IF NOT EXISTS " + TABLE_NAME + "(" + IDCont + "INTEGER not NULL," + NameCont + " TEXT not NULL," + StreetCont + " TEXT," + Street2Cont + " TEXT," + Street3Cont + " TEXT," + ZipCont + " TEXT," + CityCont + " TEXT," + CountryCont + " TEXT," + Iso2Cont + " TEXT," + PhoneCont + " TEXT," + Phone2Cont + " TEXT," + FaxCont + " TEXT," + MailCont + " TEXT," + Mail2Cont + " TEXT," + InternetCont + " TEXT," + //website of the contact DrivemapCont + " TEXT," + //a link to a drivemap to the contact PictureCont + " TEXT," + //a photo of the contact building (contact is not a person) LatitudeCont + " TEXT," + LongitudeCont + " TEXT," + "primary key(id_contact)" + "foreign key(iso2)"; and my insert looks like this public boolean SQL_INSERT_CONTACT(int IDContIns, String NameContIns, String StreetContIns, String Street2ContIns, String Street3ContIns, String ZipContIns, String CityContIns, String CountryContIns, String Iso2ContIns, String PhoneContIns, String Phone2ContIns, String FaxContIns, String MailContIns, String Mail2ContIns, String InternetContIns, String DrivemapContIns, String PictureContIns, String LatitudeContIns, String LongitudeContIns) { try{ db.execSQL("INSERT INTO " + "contact" + "(" + IDCont + ", " + NameCont + ", " + StreetCont + ", " + Street2Cont + ", " + Street3Cont + ", " + ZipCont + ", " + CityCont + ", " + CountryCont + ", " + Iso2Cont + ", " + PhoneCont + ", " + Phone2Cont + ", " + FaxCont + ", " + MailCont + ", " + Mail2Cont + ", " + InternetCont + ", " + DrivemapCont + ", " + PictureCont + ", " + LatitudeCont + ", " + LongitudeCont + ") " + "VALUES (" + IDContIns + ", " + NameContIns +", " + StreetContIns + ", " + Street2ContIns + ", " + Street3ContIns + ", " + ZipContIns + ", " + CityContIns + ", " + CountryContIns + ", " + Iso2ContIns + ", " + PhoneContIns + ", " + Phone2ContIns + ", " + FaxContIns + ", " + MailContIns + ", " + Mail2ContIns + ", " + InternetContIns + ", " + DrivemapContIns + ", " + PictureContIns + ", " + LatitudeContIns + ", " + LongitudeContIns +")"); return true; } catch (SQLException e) { return false; } } i have a DBAdapter class there i created the database public class DBAdapter { public static final String DB_NAME = "mol.db"; private static final int DB_VERSION = 1; private static final String TAG 
= "DBAdapter"; //to log private final Context context; private SQLiteDatabase db; public DBAdapter(Context context) { this.context = context; OpenHelper openHelper = new OpenHelper(this.context); this.db = openHelper.getWritableDatabase(); } public static class OpenHelper extends SQLiteOpenHelper { public OpenHelper(Context context) { super(context, DB_NAME, null, DB_VERSION); } @Override public void onCreate(SQLiteDatabase db) { // TODO Auto-generated method stub db.execSQL(Contact.SQL_CREATE); db.execSQL(Country.SQL_CREATE); db.execSQL(Picture.SQL_CREATE); db.execSQL(Product.SQL_CREATE); db.execSQL(Project.SQL_CREATE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // TODO Auto-generated method stub Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL(Contact.SQL_DROP); db.execSQL(Country.SQL_DROP); db.execSQL(Picture.SQL_DROP); db.execSQL(Product.SQL_DROP); db.execSQL(Project.SQL_DROP); onCreate(db); } i found so many different things and tried them but i didn't get anything to work... i need to know how can i access the database in my activity and how i can get the insert to work and is there sth wrong in my code? thanks for your help thats how i tried to get it into my activity public class MainTabActivity extends TabActivity { private Context context; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.maintabactivity); TabHost mTabHost = getTabHost(); Intent intent1 = new Intent().setClass(this,MapOfLight.class); //Intent intent2 = new Intent().setClass(this,Test.class); //Testactivity //Intent intent2 = new Intent().setClass(this,DetailView.class); //DetailView Intent intent2 = new Intent().setClass(this,ObjectList.class); //ObjectList //Intent intent2 = new Intent().setClass(this,Gallery.class); //Gallery Intent intent3 = new Intent().setClass(this,ContactDetail.class); mTabHost.addTab(mTabHost.newTabSpec("tab_mol").setIndicator(this.getText(R.string.mol), getResources().getDrawable(R.drawable.ic_tab_mol)).setContent(intent1)); mTabHost.addTab(mTabHost.newTabSpec("tab_highlights").setIndicator(this.getText(R.string.highlights),getResources().getDrawable(R.drawable.ic_tab_highlights)).setContent(intent2)); mTabHost.addTab(mTabHost.newTabSpec("tab_contacts").setIndicator(this.getText(R.string.contact),getResources().getDrawable(R.drawable.ic_tab_contact)).setContent(intent3)); mTabHost.setCurrentTab(1); SQLiteDatabase db; DBAdapter dh = null; OpenHelper openHelper = new OpenHelper(this.context); dh = new DBAdapter(this); db = openHelper.getWritableDatabase(); dh.SQL_INSERT_COUNTRY("AT", "Austria", "AUT"); } } i tried it with my country table because it has only 3 columns public class Country { private static String Iso2Count = "iso2_country"; private static String NameCount = "name_country"; private static String FlagCount = "flag_image_url_country"; public static final String TABLE_NAME = "country"; public static final String SQL_CREATE = "CREATE TABLE IF NOT EXISTS " + TABLE_NAME + "(" + Iso2Count + " TEXT not NULL," + NameCount + " TEXT not NULL," + FlagCount + " TEXT not NULL," + "primary key(iso2_country)"; public boolean SQL_INSERT_COUNTRY(String Iso2CountIns, String NameCountIns, String FlagCountIns) { try{ db.execSQL("INSERT INTO " + "country" + "(" + Iso2Count + ", " + NameCount + ", " + FlagCount + ") " + "VALUES ( " + Iso2CountIns + ", " + NameCountIns +", " + FlagCountIns + " )"); return 
true; } catch (SQLException e) { return false; } } another question is it better to put the insert and select from each table into a separate class, so i have 1 class for each table or put them all into the DBAdapter class?
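
    One thing that is demonstrably wrong in the code as posted: in the contact table's SQL_CREATE, IDCont + "INTEGER not NULL," concatenates to id_contactINTEGER (no space before the type), and "primary key(id_contact)" + "foreign key(iso2)" runs the two clauses together with no separating comma, no closing parenthesis, and no REFERENCES clause for the foreign key. A corrected pair of statements (columns trimmed for brevity), checked here against SQLite - which is what Android uses - via Python:

    import sqlite3

    ddl = """
    CREATE TABLE IF NOT EXISTS country (
        iso2_country TEXT NOT NULL,
        name_country TEXT NOT NULL,
        flag_image_url_country TEXT NOT NULL,
        PRIMARY KEY (iso2_country)
    );
    CREATE TABLE IF NOT EXISTS contact (
        id_contact INTEGER NOT NULL,      -- note the space before INTEGER
        name_contact TEXT NOT NULL,
        iso2_contact TEXT,
        PRIMARY KEY (id_contact),         -- clauses separated by commas
        FOREIGN KEY (iso2_contact) REFERENCES country (iso2_country)
    );
    """
    sqlite3.connect(":memory:").executescript(ddl)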

    Read the article

  • How Can I Set Up a "Where" Statement with a PHP Array

    - by Ryan
    Am I able to apply "where" statements to PHP arrays, similar to how I would be able to apply a "where" statement to a MySQL query? For example, suppose I have the following array: $recordset = array( array('host' => 1, 'country' => 'fr', 'year' => 2010, 'month' => 1, 'clicks' => 123, 'users' => 4), array('host' => 1, 'country' => 'fr', 'year' => 2010, 'month' => 2, 'clicks' => 134, 'users' => 5), array('host' => 1, 'country' => 'fr', 'year' => 2010, 'month' => 3, 'clicks' => 341, 'users' => 2), array('host' => 1, 'country' => 'es', 'year' => 2010, 'month' => 1, 'clicks' => 113, 'users' => 4), array('host' => 1, 'country' => 'es', 'year' => 2010, 'month' => 2, 'clicks' => 234, 'users' => 5), array('host' => 1, 'country' => 'es', 'year' => 2010, 'month' => 3, 'clicks' => 421, 'users' => 2), array('host' => 1, 'country' => 'es', 'year' => 2010, 'month' => 4, 'clicks' => 22, 'users' => 3), array('host' => 2, 'country' => 'es', 'year' => 2010, 'month' => 1, 'clicks' => 111, 'users' => 2), array('host' => 2, 'country' => 'es', 'year' => 2010, 'month' => 2, 'clicks' => 2, 'users' => 4), array('host' => 3, 'country' => 'es', 'year' => 2010, 'month' => 3, 'clicks' => 34, 'users' => 2), array('host' => 3, 'country' => 'es', 'year' => 2010, 'month' => 4, 'clicks' => 1, 'users' => 1),); How can I limit the output to only show the keys and values related to 'host' 1 and 'country' fr? Any help would be great.

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the limelight with the pending release of ASP.NET Web API, which is bringing a ton of attention to server-side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you to dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services.

Yes I know, C# is strongly typed, so why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response, and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry-pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish, using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code.

Creating JSON on the fly with JsonValue

Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:

[TestMethod]
public void JsonValueOutputTest()
{
    // strong type instance
    var jsonObject = new JsonObject();

    // dynamic expando instance you can add properties to
    dynamic album = jsonObject;

    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    album.Artist = "AC/DC";
    album.YearReleased = 1977;

    album.Songs = new JsonArray() as dynamic;

    dynamic song = new JsonObject();
    song.SongName = "Dirty Deeds Done Dirt Cheap";
    song.SongLength = "4:11";
    album.Songs.Add(song);

    song = new JsonObject();
    song.SongName = "Love at First Feel";
    song.SongLength = "3:10";
    album.Songs.Add(song);

    Console.WriteLine(album.ToString());
}

This produces proper JSON just as you would expect:

{"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]}

The important thing about this code is that there's no explicit type used to hold the values that are serialized to JSON. I am essentially creating this value structure on the fly by adding properties and then serializing it to JSON. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject store these values in pseudo collections of key/value pairs that are exposed as properties through the DynamicObject functionality in .NET.
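Since JsonObject stores its members as key/value pairs, it can also be enumerated like a dictionary. The following is a small sketch of my own (not from the original post) that assumes the same System.Json types; JsonObject implements IDictionary<string, JsonValue>, so the properties you assign can be walked at runtime:

using System;
using System.Json;

public class JsonObjectEnumerationDemo
{
    public static void Main()
    {
        var album = new JsonObject();

        // implicit conversions turn primitives into JsonPrimitive values
        album["AlbumName"] = "Dirty Deeds Done Dirt Cheap";
        album["Artist"] = "AC/DC";
        album["YearReleased"] = 1977;

        // JsonObject implements IDictionary<string, JsonValue>,
        // so its members can be enumerated like any dictionary
        foreach (var pair in album)
            Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
    }
}

This sort of enumeration is handy when you need fully data-driven processing of a JSON payload whose shape you don't know at compile time.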
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access syntax. Always remember, though, that these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code.

Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strong type instance
var jsonObject = new JsonObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JsonValue internally stores property keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to, to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before, JsonObject/Value behave very similarly.

Reading JSON strings into JsonValue

The JsonValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

[TestMethod]
public void JsonValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JsonValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method doesn't, as it expects two object instances, and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1977,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""67280fb8"",
        ""AlbumName"": ""Echoes, Silence, Patience & Grace"",
        ""Artist"": ""Foo Fighters"",
        ""YearReleased"": 2007,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"",
        ""Songs"": [
            { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" },
            { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" },
            { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
    ]";

    dynamic albums = JsonValue.Parse(jsonString);

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  JSON

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
Introduction

NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less well. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

History

First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it; for example, it has .NET-specific features and has evolved in different ways from its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4; it simply makes no use of any .NET 4-specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it but came separately and will also continue to be released out of line with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home is at MSDN.

Architecture

In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached in a field.

Mappings

Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings: XML-based, which has the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly; Attribute-based, for keeping both the entities and database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes; Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping will also be updated.
Entity Framework can use: Attribute-based (although attributes cannot express all of the available possibilities – for example, cascading); Strongly-typed code mappings.

Database Support

With NHibernate you can use almost any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any through OLE DB; any through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

Inheritance Strategies

Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

Associations

Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types: Bags of entities or values: unordered, possibly with duplicates; Lists of entities or values: ordered, indexed by a number column; Maps of entities or values: indexed by either an entity or any value; Sets of entities or values: unordered, no duplicates; Arrays of entities or values: indexed, immutable.

Querying

NHibernate exposes several querying APIs: LINQ is probably the most used nowadays, and really does not need to be introduced; Hibernate Query Language (HQL) is a database-agnostic, object-oriented SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if using string concatenation; the Criteria API is an implementation of the Query Object pattern where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; it is also a good choice for dynamic querying; Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this reason, although it is more refactor-friendly than Criteria, it is also less suited for dynamic queries; SQL, including stored procedures, can also be used; integration with the Lucene.NET indexer is available. As for Entity Framework: LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers; Entity-SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries; SQL, of course, is also supported.

Caching

Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be used among multiple ISessionFactorys, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add this.
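To make the querying options listed above a little more concrete, here is a minimal sketch of my own (not from the original post) showing the two NHibernate flavors most people reach for; it assumes an open ISession and a mapped Customer entity, both placeholders:

using System.Linq;
using NHibernate;
using NHibernate.Linq; // provides the Query<T>() extension method

public static class QueryingExamples
{
    public static void Run(ISession session)
    {
        // LINQ: strongly typed, the most common choice nowadays
        var byLinq = session.Query<Customer>()
                            .Where(c => c.Name == "Rick")
                            .ToList();

        // HQL: database-agnostic and string-based, well suited to dynamic queries
        var byHql = session.CreateQuery("from Customer c where c.Name = :name")
                           .SetParameter("name", "Rick")
                           .List<Customer>();
    }
}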
ID Generators

NHibernate supports different ID generation strategies, coming from the database and otherwise: Identity (for SQL Server, MySQL, and databases that support identity columns); Sequence (for Oracle, PostgreSQL, and others that support sequences); Trigger-based; HiLo; Sequence HiLo (for databases that support sequences); several GUID flavors, in both GUID and string format; Increment (for single-user uses); Assigned (you must know what you're doing); Sequence-style (either uses an actual sequence or a single-column table); Table of ids; Pooled (similar to HiLo but stores high values in a table); Native (uses whatever mechanism the current database supports, identity or sequence). Entity Framework only supports: Identity generation; GUIDs; Assigned values.

Properties

NHibernate supports properties of entity types (one to one or many to one) and collections (one to many or many to many), as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties derived from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

Events and Interception

NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which includes deleting, inserting and updating).

Tracking Changes

For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it; Entity Framework does, however, also support self-tracking entities.

Optimistic Concurrency Control

NHibernate supports all of the imaginable scenarios: SQL Server's ROWVERSION; Oracle's ORA_ROWSCN; a column containing date and time; a column containing a version number; all/dirty columns comparison. Entity Framework is more focused on SQL Server, so it only supports: SQL Server's ROWVERSION; comparing all/some columns.

Batching

NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

Cascading

Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them.

Flushing Changes

NHibernate's ISession has a FlushMode property that can have the following values: Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed; Commit: changes are sent when committing the current transaction; Never: changes are only sent when explicitly calling Flush(). As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
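The flush modes are easier to see in code. Below is a minimal sketch of my own (not from the original post): ISession, ITransaction and FlushMode are real NHibernate types, while Customer is a placeholder mapped entity:

using NHibernate;

public static class FlushingExamples
{
    public static void Save(ISessionFactory factory, Customer customer)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Never: nothing is sent until Flush() is called explicitly
            session.FlushMode = FlushMode.Never;

            session.Save(customer); // only queued in the session so far
            session.Flush();        // the INSERT is issued here
            tx.Commit();
        }
    }
}

// Entity Framework, by contrast, always requires an explicit call:
// context.SaveChanges();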
Lazy Loading

NHibernate supports lazy loading for: associated entities (one to one, many to one); collections (one to many, many to many); scalar properties (think of BLOBs or CLOBs). Entity Framework only supports lazy loading for: associated entities; collections.

Generating and Updating the Database

Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping and updating it if the mapping changes.

Extensibility

As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ to SQL transformation, HQL native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended/changed. It features a provider model that can be extended to support any database.

Integration With Other Microsoft APIs and Tools

When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

Documentation

This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although they are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

Conclusion

Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

Inheritance Mapping with Entity Framework Code First

All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

A Note For Those Who Follow Other Entity Framework Approaches

If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First-specific, the explanations of each strategy apply equally to all approaches, be it Code First or otherwise.

A Note For Those Who are New to Entity Framework and Code-First

If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

A Top Down Development Scenario

These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
I’ll show some tricks along the way that help you deal with imperfect table layouts. Let’s start with the mapping of entity inheritance.

The Domain Model

In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.

Implement the Object Model with Code First

As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }
}

This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries

LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
List<BillingDetail> billingDetails = linqQuery.ToList();

Or the same query in EntitySQL:

string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                         .CreateQuery<BillingDetail>(eSqlQuery);
List<BillingDetail> billingDetails = objectQuery.ToList();

linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries

All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but instances of all subclasses of that class as well. On the other hand, non-polymorphic queries are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount:

IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

EntitySQL has an OFTYPE operator that does the same thing:

string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                     FROM BillingDetails AS b
                     WHERE b IS OF(Model.BankAccount)";

(Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

Table per Class Hierarchy (TPH)

An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

Discriminator Column

As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in SubClasses to be Nullable in the Database

TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form

Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
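Before looking at the generated SQL, here is a minimal usage sketch of my own (not from the post) that ties the pieces together; it assumes the InheritanceMappingContext and entity classes defined earlier:

using System;

public class TphDemo
{
    public static void Main()
    {
        using (var context = new InheritanceMappingContext())
        {
            // Insert a subclass instance; Code First writes "CreditCard"
            // into the Discriminator column automatically
            context.BillingDetails.Add(new CreditCard
            {
                Owner = "Morteza",
                Number = "1234",
                CardType = 1,
                ExpiryMonth = "07",
                ExpiryYear = "2013"
            });
            context.SaveChanges();

            // Polymorphic read: materializes CreditCard and BankAccount
            // instances from the single BillingDetails table
            foreach (BillingDetail detail in context.BillingDetails)
                Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);
        }
    }
}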
Generated SQL Query

Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
    [Extent1].[Discriminator] AS [Discriminator],
    [Extent1].[BillingDetailId] AS [BillingDetailId],
    [Extent1].[Owner] AS [Owner],
    [Extent1].[Number] AS [Number],
    [Extent1].[BankName] AS [BankName],
    [Extent1].[Swift] AS [Swift],
    [Extent1].[CardType] AS [CardType],
    [Extent1].[ExpiryMonth] AS [ExpiryMonth],
    [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
    [Extent1].[BillingDetailId] AS [BillingDetailId],
    [Extent1].[Owner] AS [Owner],
    [Extent1].[Number] AS [Number],
    [Extent1].[BankName] AS [BankName],
    [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column, and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API

Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Also, changing the data type of the discriminator column is interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept an object:

public void HasValue(object value);

Therefore, if, for example, we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 & 2) in the discriminator column but also changes the column type to be (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary

In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn’t expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate, here's a simple example. If you have a Web API method like this:

[HttpGet]
public HttpResponseMessage Authenticate(string username, string password)
{ … }

and then hit it with a URL like this:

http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit

it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead, like this:

[HttpPost]
public HttpResponseMessage Authenticate(string username, string password)
{ … }

and hit it with a POST HTTP request like this:

POST http://localhost:88/samples/authenticate HTTP/1.1
Host: localhost:88
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Content-type: application/x-www-form-urlencoded
Content-Length: 30

Username=ricks&Password=sekrit

you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null, and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back asking why I'd want to do this in the first place - after all, HTML form submissions are the domain of MVC and not Web API, which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values:

$.post(url, { Username: "Rick", Password: "sekrit" }, function(result) {…});

but alas that doesn't work.

How ASP.NET Web API handles Content Bodies

Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from the async nature of Web API, where the request data is read only once inside the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values; FormDataCollection - a collection of POST keys/values.

ModelBinding POST Values - Binding POST data to Object Properties

The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter.
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: you have to provide a strongly typed object that can receive the data, and this object has to map the inbound data. To rewrite the example above to use ModelBinding, I have to create a class that maps the properties that I need as parameters:

public class LoginData
{
    public string Username { get; set; }
    public string Password { get; set; }
}

and then accept the data like this in the API method:

[HttpPost]
public HttpResponseMessage Authenticate(LoginData login)
{
    string username = login.Username;
    string password = login.Password;
    …
}

This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this:

POST http://localhost:88/samples/authenticate HTTP/1.1
Host: localhost:88
Accept: application/json
Content-type: application/json
Content-Length: 40

{"Username":"ricks","Password":"sekrit"}

it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model Binding, and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data; sometimes there are just a couple of simple values that need to be sent back. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding, so the following would still work:

[HttpPost]
public HttpResponseMessage Authenticate(LoginData login, string loginDomain)

but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work.

Collecting POST values with FormDataCollection

Another, more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general); you then read the values out individually by querying each key.

[HttpPost]
public HttpResponseMessage Authenticate(FormDataCollection form)
{
    var username = form.Get("Username");
    var password = form.Get("Password");
    …
}

The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters (a short sketch of this follows a little later), and it gets a bit more complicated to test such a setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and especially with string parameters is easy to deal with.
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.

What about [FromBody]?

Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. I'm not really sure where the Web API team thought [FromBody] would be a good fit, other than as a down and dirty way to send a full string buffer.

Extending Web API to make multiple POST Vars work? Don't think so

Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad, because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community, I didn't hear anything that even suggests that this is possible. From what I could find, I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it.

Do we really need multi-value POST mapping?

I think that POST value mapping is a feature one would expect of any API tool. If you look at common APIs out there like Flickr and Google Maps, they all work with POST data. POST data is very prominent, much more so than JSON inputs, and so supporting as many options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question.
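As promised above, here is a small sketch of my own (not from the post) of the type conversions you end up doing with FormDataCollection; the action name and form keys are made up for illustration, and only the Get() method shown earlier is used:

[HttpPost]
public HttpResponseMessage UpdateQuantity(FormDataCollection form)
{
    // all values arrive as strings and must be converted by hand
    string product = form.Get("Product");

    int quantity;
    if (!int.TryParse(form.Get("Quantity"), out quantity))
        return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
                                           "Quantity must be a number.");

    // ... apply the update using product and quantity ...
    return Request.CreateResponse(HttpStatusCode.OK);
}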
I hope that for V.next of Web API, Microsoft will consider this a feature worth adding.

Related Articles
Passing multiple POST parameters to Web API Controller Methods
Mike Stall's post: How Web API does Parameter Binding
Where does ASP.NET Web API Fit?

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Visual Studio 2010 and .NET 4 Released

    - by ScottGu
The final release of Visual Studio 2010 and .NET 4 is now available.

Download and Install Today

MSDN subscribers, as well as WebsiteSpark/BizSpark/DreamSpark members, can now download the final releases of Visual Studio 2010 and TFS 2010 through the MSDN subscribers download center.  If you are not an MSDN subscriber, you can download free 90-day trial editions of Visual Studio 2010.  Or you can download the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out).  If you are looking for an easy way to set up a new machine for web development, you can automate installing ASP.NET 4, ASP.NET MVC 2, IIS, SQL Server Express and Visual Web Developer 2010 Express really quickly with the Microsoft Web Platform Installer (just click the install button on the page).

What is new with VS 2010 and .NET 4

Today’s release is a big one – and brings with it a ton of new features and capabilities. One of the things we tried hard to focus on with this release was to invest heavily in making existing applications, projects and developer experiences better.  What this means is that you don’t need to read 1000+ page books or spend time learning major new concepts in order to take advantage of the release.  There are literally thousands of improvements (both big and small) that make you more productive and successful without having to learn big new concepts in order to start using them.  Below is just a small sampling of some of the improvements with this release:

Visual Studio 2010 IDE

Visual Studio 2010 now supports multiple monitors (enabling much better use of screen real estate).  It has new code Intellisense support that makes it easier to find and use classes and methods. It has improved code navigation support for searching code-bases and seeing how code is called and used.  It has new code visualization support that allows you to see the relationships across projects and classes within projects, as well as to automatically generate sequence diagrams to chart execution flow.  The editor now supports HTML and JavaScript snippet support as well as improved JavaScript intellisense. The VS 2010 Debugger and Profiling support is now much, much richer and enables new features like Intellitrace (aka Historical Debugging), debugging of Crash/Dump files, and better parallel debugging.  VS 2010’s multi-targeting support is now much richer, and enables you to use VS 2010 to target .NET 2, .NET 3, .NET 3.5 and .NET 4 applications.  And the infamous Add Reference dialog now loads much faster. TFS 2010 is now easy to set up (you can now install the server in under 10 minutes) and enables great source-control, bug/work-item tracking, and continuous integration support.  Testing (both automated and manual) is now much, much richer.  And VS 2010 Premium and Ultimate provide much richer architecture and design tooling support.

VB and C# Language Features

VB and C# in VS 2010 both contain a bunch of new features and capabilities.  VB adds new support for automatic properties, collection initializers, and implicit line continuation, among many other features.  C# adds support for optional parameters and named arguments, a new dynamic keyword, and co-variance and contra-variance, among many other features.

ASP.NET 4 and ASP.NET MVC 2

With ASP.NET 4, Web Forms controls now render clean, semantically correct, and CSS friendly HTML markup.
Built-in URL routing functionality allows you to expose clean, search engine friendly URLs and increase the traffic to your Website.  ViewState within applications can now be more easily controlled and made smaller.  ASP.NET Dynamic Data support has been expanded.  More controls, including rich charting and data controls, are now built into ASP.NET 4 and enable you to build applications even faster.  New starter project templates now make it easier to get going with new projects.  SEO enhancements make it easier to drive traffic to your public facing sites.  And web.config files are now clean and simple. ASP.NET MVC 2 is now built into VS 2010 and ASP.NET 4, and provides a great way to build web sites and applications using a model-view-controller based pattern. ASP.NET MVC 2 adds features to easily enable client and server validation logic, and provides new strongly-typed HTML and UI-scaffolding helper methods.  It also enables more modular/reusable applications.  The new <%: %> syntax in ASP.NET makes it easier to HTML encode output.  Visual Studio 2010 also now includes better tooling support for unit testing and TDD.  In particular, “Consume first intellisense” and “generate from usage" support within VS 2010 make it easier to write your unit tests first, and then drive your implementation from them. Deploying ASP.NET applications gets a lot easier with this release. You can now publish your Websites and applications to a staging or production server from within Visual Studio itself. Visual Studio 2010 makes it easy to transfer all your files, code, configuration, database schema and data in one complete package. VS 2010 also makes it easy to manage separate web.config configuration file settings depending upon whether you are in debug, release, staging or production mode.

WPF 4 and Silverlight 4

WPF 4 includes a ton of new improvements and capabilities, including more built-in controls, richer graphics features (cached composition, pixel shader 3 support, layout rounding, and animation easing functions), and a much improved text stack (with crisper text rendering, custom dictionary support, and selection and caret brush options).  WPF 4 also includes a bunch of support to enable you to take advantage of new Windows 7 features – including multi-touch and Windows 7 shell integration. Silverlight 4 will launch this week as well.  You can watch my Silverlight 4 launch keynote streamed live Tuesday (April 13th) at 8am Pacific Time.  Silverlight 4 includes a ton of new capabilities – including a bunch for making it possible to build great business applications and out-of-the-browser applications.  I’ll be doing a separate blog post later this week (once it is live on the web) that talks more about its capabilities. Visual Studio 2010 now includes great tooling support for both WPF and Silverlight.  The new VS 2010 WPF and Silverlight designer makes it much easier to build client applications and great line-of-business solutions, as well as to integrate and bind with data.  Tooling support for Silverlight 4 with the final release of Visual Studio 2010 will be available when Silverlight 4 releases to the web this week.

SharePoint and Azure

Visual Studio 2010 now includes built-in support for building SharePoint applications.  You can now create, edit, build, and debug SharePoint applications directly within Visual Studio 2010.  You can also now use SharePoint with TFS 2010.
Support for creating Azure-hosted applications is also now included with VS 2010 – allowing you to build ASP.NET and WCF based applications and host them within the cloud.

Data Access

Data access has a lot of improvements coming to it with .NET 4.  Entity Framework 4 includes a ton of new features and capabilities – including support for model first and POCO development, default support for lazy loading, built-in support for pluralization/singularization of table/property names within the VS 2010 designer, full support for all the LINQ operators, the ability to optionally expose foreign keys on model objects (useful for some stateless web scenarios), disconnected API support to better handle N-Tier and stateless web scenarios, and T4 template customization support within VS 2010 to allow you to customize and automate how code is generated for you by the data designer.  In addition to improvements with the Entity Framework, LINQ to SQL with .NET 4 also includes a bunch of nice improvements.

WCF and Workflow

WCF includes a bunch of great new capabilities – including better REST, activation and configuration support.  WCF Data Services (formerly known as Astoria) and WCF RIA Services also now enable you to easily expose and work with data from remote clients. Windows Workflow is now much faster, includes flowchart services, and now makes it easier than before to build custom services.  More details can be found here.

CLR and Core .NET Library Improvements

.NET 4 includes the new CLR 4 engine – which includes a lot of nice performance and feature improvements.  The CLR 4 engine now runs side-by-side in-process with older versions of the CLR – allowing you to use two different versions of .NET within the same process.  It also includes improved COM interop support.  The .NET 4 base class libraries (BCL) include a bunch of nice additions and refinements.  In particular, the .NET 4 BCL now includes new parallel programming support that makes it much easier to build applications that take advantage of multiple CPUs and cores on a computer.  This work dove-tails nicely with the new VS 2010 parallel debugger (making it much easier to debug parallel applications), as well as the new F# functional language support now included in the VS 2010 IDE.  .NET 4 also now has the Dynamic Language Runtime (DLR) library built in – which makes it easier to use dynamic language functionality with .NET.  MEF – a really cool library that enables rich extensibility – is also now built into .NET 4 and included as part of the base class libraries.

.NET 4 Client Profile

The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB).  We also now have a .NET 4 Client Profile package, which is a pure subset of the full .NET that can be used to streamline client application installs.

C++

VS 2010 includes a bunch of great improvements for C++ development.  This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations.

My VS 2010 and .NET 4 Blog Series

I’ve been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements.  The good news is that I have about 20 in-depth posts already written.  The bad news (for me) is that I have about 200 more to go until I’m done!
I’m going to try and keep adding a few more each week over the next few months to discuss the new improvements and how best to take advantage of them. Below is a list of the already written ones that you can check out today:

- Clean Web.Config Files
- Starter Project Templates
- Multi-targeting
- Multiple Monitor Support
- New Code Focused Web Profile Option
- HTML / ASP.NET / JavaScript Code Snippets
- Auto-Start ASP.NET Applications
- URL Routing with ASP.NET 4 Web Forms
- Searching and Navigating Code in VS 2010
- VS 2010 Code Intellisense Improvements
- WPF 4
- Add Reference Dialog Improvements
- SEO Improvements with ASP.NET 4
- Output Cache Extensibility with ASP.NET 4
- Built-in Charting Controls for ASP.NET and Windows Forms
- Cleaner HTML Markup with ASP.NET 4 - Client IDs
- Optional Parameters and Named Arguments in C# 4 - and a cool scenario with ASP.NET MVC 2
- Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010
- New <%: %> Syntax for HTML Encoding Output using ASP.NET 4
- JavaScript Intellisense Improvements with VS 2010

Stay tuned to my blog as I post more. Also check out this page which links to a bunch of great articles and videos done by others.

VS 2010 Installation Notes

If you have installed a previous version of VS 2010 on your machine (either the beta or the RC) you must first uninstall it before installing the final VS 2010 release. I also recommend uninstalling the .NET 4 betas (including both the client and full .NET 4 installs) as well as the other installs that come with VS 2010 (e.g. ASP.NET MVC 2 preview builds, etc). Uninstalling the betas/RCs will clean up all the old state on your machine – after which you can install the final VS 2010 version and everything should just work (this is what I’ve done on all of my machines and I haven’t had any problems).

The VS 2010 and .NET 4 installs add a bunch of new managed assemblies to your machine. Some of these will be “NGEN’d” to native code during the actual install process (making them run fast). To avoid adding too much time to VS setup, though, we don’t NGEN all assemblies immediately – and instead NGEN the rest in the background when your machine is idle. Until it finishes NGENing the assemblies they will be JIT’d to native code the first time they are used in a process – which for large assemblies can sometimes cause a slight performance hit.

If you run into this you can manually force all assemblies to be NGEN’d to native code immediately (and not just wait till the machine is idle) by launching the Visual Studio command line prompt from the Windows Start Menu (Microsoft Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt). Within the command prompt type:

Ngen executequeueditems

This will cause everything to be NGEN’d immediately.

How to Buy Visual Studio 2010

You can download and use the free Visual Studio Express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++. These Express editions are available completely for free (and never time out).

You can buy a new copy of VS 2010 Professional that includes a 1 year subscription to MSDN Essentials for $799. MSDN Essentials includes a developer license of Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, SQL Server 2008 DataCenter R2, and 20 hours of Azure hosting time. Subscribers also have access to MSDN’s Online Concierge, and Priority Support in MSDN Forums.

Upgrade prices from previous releases of Visual Studio are also available.
Existing Visual Studio 2005/2008 Standard customers can upgrade to Visual Studio 2010 Professional for a special $299 retail price until October. You can take advantage of this VS Standard -> Professional upgrade promotion here.

Web developers who build applications for others, and who are either independent developers or who work for companies with fewer than 10 employees, can also optionally take advantage of the Microsoft WebSiteSpark program. This program gives you three copies of Visual Studio 2010 Professional, 1 copy of Expression Studio, and 4 CPU licenses of both Windows 2008 R2 Web Server and SQL 2008 Web Edition that you can use to both develop and deploy applications at no cost for 3 years. At the end of the 3 years there is no obligation to buy anything. You can sign up for WebSiteSpark today in under 5 minutes – and immediately have access to the products to download.

Summary

Today’s release is a big one – and has a bunch of improvements for pretty much every developer. Thank you to everyone who provided feedback, suggestions and reported bugs throughout the development process – we couldn’t have delivered it without you.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • CodePlex Daily Summary for Friday, May 14, 2010

CodePlex Daily Summary for Friday, May 14, 2010

New Projects
Campfire#: Campfire# is a campfire client written in .NET 4.0 using WPF, which uses the Campfire API.
CHESS: Systematic Concurrency Testing: CHESS is a tool for systematic and disciplined concurrency testing. Given a concurrent test, CHESS systematically enumerates the possible thread sc...
cmpp: cmpp
cycloid: Arcanoid game
DotNetNuke® C#: The DotNetNuke® project is developed and maintained on a Visual Basic codebase, however a C# version has always been a popular request. This is a ...
EasyBuildingCMS.NET: EasyBuildingCMS is an easy-to-use content management system.
fluidCMS: Provide for flexible management of web content that is not tightly integrated with the layout and rendering of sites that consume the content.
Golem: An automation tool oriented to localization engineering environments
HB Batch Encoder Mk 2: HandBrake Batch Encoder Mk II. This program was adapted from an original project downloaded from CodePlex by the name of "Handbrake Batch Encoder"...
Integrating Social Media Networks: This is part of my post-graduation project.
Ketonic: The Ketonic project aims to improve development of websites based on the Kentico CMS.
LinkSharp: LinkSharp is a short-URL provider that can be used to generate short, static, non-changing URLs. The web interface allows you to easily add / edit /...
PUC NET (C++ Network Library - PUC Minas): This is an academic library for easy development of applications and games based on network communication.
Regular Expression Tester: Small utility for testing regular expressions
SharePoint User Management WebPart: SharePoint User Management WebPart
SharpBox: SharpBox makes it easier for .NET developers to interact with existing cloud storage services, e.g. DropBox or Amazon S3
Snipivit: Snipivit is a snippet manager service and VS2010 plugin that allows small development teams to store all their code snippets on a central database,...
Software Factories Applied: Software Factories Applied is a project collecting the companion bits for the eponymous book to be published by Wiley & Sons in 2011. The authors ...
The Ping Master: A service that periodically pings network addresses and allows the running of command-line type utilities in response to success or failure.
Title Safe Region Checker: A simple utility for XNA developers to check screenshots from games intended for release on the LIVE Marketplace for "title safe" region compliance...
Trial project: sky is blue
Uyghur Named Date: Generate Uyghur named date strings. (ئۇيغۇرچە ئاي ناملىق چىسلا ھاسىل قىلىش)
Wildcard Search Web Part for SharePoint 2010: The Wildcard Search web part for MOSS 2007 was wildly successful. Although SharePoint 2010 has built-in wildcard searching functionality, the out...
在线Office控件 Online Offical Control: Online Office control
软件作品发布平台: SoftwarePublishPlatform - a software publishing platform

New Releases
Demina: Demina Binaries version 0.1: Demina binaries are now available. This release (version 0.1) is an alpha version. Please report any bugs for extermination.
EasyTFS: EasyTfs 1.0 Beta 2: Added cache refreshing when contents are updated rather than just every 10 minutes. Added window title based on currently-open case. Added attachme...
Extending C# editor - Outlining, classification: Initial release: Initial release
HB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.01: Binary release files.
HB Batch Encoder Mk 2: Source Code: Source Code
Hobbybrew Mobile: Beta 2: Fixed numerous bugs; added a "rough" implementation of heating for infusion. Recommended update!
HouseFly controls: HouseFly controls beta 1.0.2.0: HouseFly controls release 1.0.2.0 beta
Html Reader: Beta 2: I fixed a bug in HtmlElementCollection, which exposed an integer enumerator instead of enumerating through HtmlElements. I added a WPF Window tha...
Html to OpenXml: HtmlToOpenXml 1.2: Fixes some reported bugs. See change set for description. The dll library to include in your project. The dll is signed for GAC support. Compiled wi...
Infection Protection: Infection Protection 0.1: This is the final version of Infection Protection that was entered into the 2010 OGPC game competition.
Jobping Url Shortener: Deploy Code 0.5.1: Deployment code for Version 0.5. This version includes our Jobping style.
Jobping Url Shortener: Source Code 0.5.1: Source code for the 0.5 release. This release includes our Jobping style skin.
Kooboo HTML form: Kooboo HTML form module 2.1.0.1: HTML form module contributed by member aledelgo. Adds SMTP user and password authentication.
KooBoo Image Galery: Beta 2: This new version corrects some issues pointed out by Guoqi Zheng. Some schema and folders were renamed, so it's better to uninstall the module and remo...
MFCMAPI: May 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1020. The 64 bit bu...
MVC Turbine: Release 2.1 for MVC2: This RTM contains the same features as v2.0 RTM plus these features: Instance Registration to IServiceLocator. You can now add an instance of a typ...
NazTek.Extension.Clr4: NazTek.Extension.Clr4 Binary: Binary release
NazTek.Extension.Clr4: NazTek.Extension.Clr4 Source: Cab with source code
NSIS Autorun: NSIS Autorun 0.1.8: This release includes source code, executable binaries and example materials.
Ottawa IT Day: 2010 Source Code and Presentations: During the Ottawa IT Day 2010, some of the presenters shared their code (and some presentations). This release is the culmination of all those effo...
PHPWord: PHPWord 0.6.1 Beta: Changelog: Fixed error when adding a JPEG image and opening in Office 2007 (Issue #1). Fixed already defined constant PHPWORD_BASE_PATH (Issue #2). F...
Rapid Dictionary: Rapid Dictionary Alpha 2.0: Release Notes: Try the auto-updatable version: http://install.rapiddict.com/index.html. Rapid Dictionary Alpha 2.0 includes such functionality: ...
Shake - C# Make: Shake v0.1.18: Core changes. Process wrapper class, console logger, etc.
SharpBox: SharpBox-Trunk: This is the SharpBox build from the trunk source branch!
SharpBox: SharpBox-Trunk-Initial-Source: The initial source code, will be updated from time to time
Spackle.NET: 4.0.0.0 Release: This new drop contains the following: a CreateBigInteger() method on SecureRandom to create random BigInteger values, and an extension method to prop...
StreamInsight example queries, input adapters and output adapters: StreamInsight Examples for V1.0 RTM: Zipped source code.
The Ping Master: v0.1.0.0: Early release of The Ping Master for test purposes. Configuration tool is unfinished and does not include an installer.
Title Safe Region Checker: Title Safe Region Checker v1.0.0.1: Release 1.0 of Title Safe Region Checker. No known bugs or problems. File is a zipped directory containing the necessary installation files.
TortoiseHg: TortoiseHg 1.0.3: This is a bug fix release; we recommend all users upgrade to 1.0.3
Usa*Usa Libraly: Smart.Windows.Navigation 0.4: Smart.Windows.Navigation simple navigation library ver 0.4.0. Includes Windows Forms & Compact Framework samples.
Information - Smart.Windows.Mvc ...
VCC: Latest build, v2.1.30513.0: Automatic drop of latest build
WabbitStudio Z80 Software Tools: Wabbitcode: Wabbitcode is a Z80 Assembly IDE for Windows, OS X, and Linux. Built to take full advantage of the features of SPASM and Wabbitemu, Wabbitcode has...
white: Release 0.20: Source Code: https://white-project.googlecode.com/svn/tags/0.20. Added a few more keyboard keys like the Windows button and F13-F24. Fixed bugs for keyboar...
Wildcard Search Web Part for SharePoint 2010: Version 1.0 Release 1: This is the initial release of the Wildcard Search Web Part for SharePoint 2010. All queries will be issued as wildcards unless disabled with the ...
Windows Azure Command-line Tools for PHP Developers: Windows Azure Command-line Tools May 2010 Update: May 2010 Update - May 13, 2010. We are pleased to announce the May 2010 update of Windows Azure Command-Line Tools. In addition to bug fixes and i...
WinXmlCook: WinXmlCook 2.1: Version 2.1 released!
Xrns2XMod: Xrns2XMod 1.1: some source code optimization
在线Office控件 Online Offical Control: SPOffice2.0Release: This release has been tested under MS Office 2003/2007, WPS 2009 and WPS 2010

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
patterns & practices – Enterprise Library
Microsoft SQL Server Community & Samples
PHPExcel
ASP.NET

Most Active Projects
patterns & practices – Enterprise Library
Mirror Testing System
Rawr
BlogEngine.NET
PHPExcel
Microsoft Biology Foundation
white
Windows Azure Command-line Tools for PHP Developers
StyleCop
Shake - C# Make


  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

Recently, I got a new project assignment that requires connecting permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than 5 years within my company's network, I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).

WatchGuard Firebox SSL - About dialog

Borrowing some files from a Windows client installation

Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here, and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated from both parties - customer and me.

Of course, this whole client package and my long-approved and stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this is years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:

Application folder below user profile with configuration and certificate files

From there we are going to borrow four files, namely:

ca.crt
client.crt
client.ovpn
client.pem

and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.
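However you shuttle the files over (USB stick, network share, scp), it's worth tightening ownership and permissions once they land on the Linux side - the .pem file is your private key. A minimal sketch, assuming the four files ended up in your current directory and you want them in the default location used later in this article:

$ sudo cp ca.crt client.crt client.ovpn client.pem /etc/openvpn/
$ sudo chown root:root /etc/openvpn/ca.crt /etc/openvpn/client.crt /etc/openvpn/client.ovpn /etc/openvpn/client.pem
$ sudo chmod 0600 /etc/openvpn/client.pem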
Configuration of OpenVPN (console)

Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what is necessary to be done.

First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:

$ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome

Just to be on the safe side. The four above-mentioned files could be copied anywhere, but either you place them below your own user directory or you put them (as root) below the default directory: /etc/openvpn

At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's similar information to what you would get from the 'View Logs...' context menu entry in Windows):

$ sudo openvpn --config client.ovpn

Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.

Remote server and user authentication to establish the VPN

Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

Simplifying your life - authentication file

In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having the necessity to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site.

For that you should first change the file extension '.ovpn' to '.conf', which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line.

$ sudo mv client.ovpn client.conf
$ sudo nano client.conf

You should have a similar content to this one here:

dev tun
client
proto tcp-client
ca ca.crt
cert client.crt
key client.pem
tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
remote-cert-eku "TLS Web Server Authentication"
remote 1.2.3.4 443
persist-key
persist-tun
verb 3
mute 20
keepalive 10 60
cipher AES-256-CBC
auth SHA1
float 1
reneg-sec 3660
nobind
mute-replay-warnings
auth-user-pass auth.txt

Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the file name parameter on the last line - the existing auth-user-pass directive is extended with 'auth.txt', and we have to create that new authentication file. You can give the directive 'auth-user-pass' any file name you'd like to. Due to my existing OpenVPN infrastructure my setup differs completely from the above written content, but for the sake of simplicity I just keep it 'as-is'.

Okay, let's create this file 'auth.txt':

$ sudo nano auth.txt

and just put two lines of information in it - username on the first, and password on the second line, like so:

myvpnusername
verysecretpassword

Store the file, change permissions, and call openvpn with your configuration file again:

$ sudo chmod 0600 auth.txt
$ sudo openvpn --config client.conf

This should now work without being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn you can also operate your VPNs via the service command, like so:

$ sudo service openvpn start client
$ sudo service openvpn stop client
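Before moving on, it doesn't hurt to verify that the tunnel really is up. A quick sketch - the tun0 device name and the 192.168.6.x addresses are taken from the PUSH_REPLY shown further down in this article and will differ in your environment:

$ ip addr show tun0                      # tunnel device should be UP with an address assigned by the server
$ ping -c 3 192.168.6.1                  # the route-gateway pushed by the remote server
$ grep ovpn /var/log/syslog | tail -n5   # look for 'Initialization Sequence Completed'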
Using Network Manager

For newer Linux users or the ones with 'console-phobia' I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

Network connections overview in Ubuntu

Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'

Choose connection type to import VPN configuration

Now you navigate to the folder where you put the client files from the Windows system and you open the 'client.ovpn' file. Next, on the tab 'VPN' proceed with the following steps (directives from the configuration file are referred to in parentheses):

General
- Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)

Authentication
- Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
- Enter User name to access your client keys (Auth Name: myvpnusername)
- Enter Password (Auth Password: verysecretpassword) and choose your password handling
- Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
- Browse for your CA Certificate ('ca' - should be filled as ca.crt)
- Specify your Private Key ('key' - here: client.pem)

Then click on the 'Advanced...' button and check the following values:
- Use custom gateway port: 443 (second value of the 'remote' directive)
- Check the selected value of Cipher ('cipher')
- Check HMAC Authentication ('auth')
- Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry in your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.

Advanced topic: routing

As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side there are two possibilities:

- Proper routing on both sides of the connection, which enables both-direction access, or
- Network masquerading on the 'client side' of the connection

In the following, I'm going to describe the second option a little bit more in detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google.

OK, back to the actual modifications. First, we need to have some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we established the OpenVPN channel, like so:

$ sudo tail -n20 /var/log/syslog

Or if your system is quite busy with logging, like so:

$ sudo less /var/log/syslog | grep ovpn

The output should contain a PUSH_REPLY message similar to the following one:

Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

The interesting part for us is the route directive in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase your netmask.

After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly. I created a shell script to take care of those steps:

#!/bin/sh -e
IPTABLES=/sbin/iptables
DEV_LAN=eth0
DEV_VPNS=tun+
VPN=192.168.1.0/24

$IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
$IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.
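Rather than running that firewall script by hand after each connect, OpenVPN can execute it for you once the tunnel device is up. A small sketch of the two directives you could add to client.conf - the path /etc/openvpn/vpn-firewall.sh is a made-up name for the script above (saved and made executable with chmod +x), and 'script-security 2' is required so that OpenVPN is allowed to call user-defined scripts:

# added to /etc/openvpn/client.conf (sketch)
script-security 2
up /etc/openvpn/vpn-firewall.sh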
Simplifying your life - automatic connect on boot

Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.

$ sudo nano /etc/default/openvpn

Which should have content similar to this:

# This is the configuration file for /etc/init.d/openvpn
#
# Start only these VPNs automatically via init script.
# Allowed values are "all", "none" or space separated list of
# names of the VPNs. If empty, "all" is assumed.
# The VPN name refers to the VPN configuration file name.
# i.e. "home" would be /etc/openvpn/home.conf
#
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="home office"
#
# ... more information which remains unmodified ...

With the OpenVPN client configuration as described above you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:

$ sudo service openvpn restart

Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server.

Cheers, JoKi


  • CodePlex Daily Summary for Tuesday, March 30, 2010

CodePlex Daily Summary for Tuesday, March 30, 2010

New Projects
CloudMail: Want to send email from Azure? Cloud Mail is designed to provide a small, effective and reliable solution for sending email from the Azure platfo...
CommunityServer Extensions: Here you can find some CommunityServer extensions and bug fixes. The main goal is to provide you with the ability to correct some common problems...
ContactSync: ContactSync is a set of .NET libraries, UI controls and applications for managing and synchronizing contact information. It includes managed wrapp...
Dng portal: DNG Portal, based on ASP.NET MVC
DotNetNuke Referral Tracker: The Referral Tracker module allows you to save URL variables, the referring page, and the previous page into a session variable or cookie. Then, th...
Foursquare for Windows Phone 7: Foursquare for Windows Phone 7.
GEGetDocConfig: SharePoint utility to list information concerning document libraries in one or more sites. Displays Size, Validity, Folder, Parent, Author, Minor a...
Google Maps API for .NET: Fast and lightweight client libraries for the Google Maps API.
kbcchina: kbc china
Load Test User Mock Toolkits: This project is a framework mocking user activities with the VSTS Load Test tool to speed up test script development. The project includes a general framework simulating user behaviour, which can simplify...
Resonance: Resonance is a system to train neural networks. It allows you to automate the training of a neural network, distributing the calculation on multiple machi...
SharePoint Company Directory / Active Directory Self Service System: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint. Then this info ca...
SmartShelf: Manage files and folders on Windows and Windows Live.
sysFix: sysFix is a tool for system administrators to easily manage and fix common system errors.
xnaWebcam: Webcam usage in XNA Game Studio 3.1

New Releases
All-In-One Code Framework: All-In-One Code Framework 2010-03-29: Improved and newly added examples. For an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Samples for Azure: Name Des...
ARSoft.Tools.Net - C# DNS and SPF Library: 1.3.0: Added support for creating own DNS server implementations. Added full IPv6 support to the DNS client. Some performance optimizations to the DNS client.
Artefact Animator: Artefact Animator - Silverlight 3 and WPF .Net 3.5: Artefact Animator Version 2.0.4.1. Silverlight 3: ArtefactSL.dll for both Debug and Release. WPF 3.5: Artefact.dll for both Debug and Release.
BatterySaver: Version 0.4: Added support for a system tray icon for controlling the application and switching profiles (Issue)
BizTalk Server 2006 Orchestration Profiler: Profiler v1.2: This is a point release of the profiler and has been updated to work on 64-bit systems. No other new functionality is available. To use this ensure...
CloudMail: CloudMail_0.5_beta: Initial public release.
For documentation see http://cloudmail.codeplex.com/documentation.
CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.44: See Source Code tab for recent change history.
Dawf: Dual Audio Workflow: Beta 2: Fixes little bugs and improves usability by changing the way it finds the good audio.
DotNetNuke Referral Tracker: 2.0.1: First release
Foursquare for Windows Phone 7: Foursquare 2010.03.29.02: Foursquare 2010.03.29.02
GEGetDocConfig: GEGETDOCCONFIG.ZIP: Installation: Simply download the zip file and extract the executable into its own directory on the SharePoint front-end server. Note: There will b...
GKO Libraries: GKO Libraries 0.2 Beta: Added: Binary search. Unmanaged wrappers, interop and p/invoke functions and structures. Windows service wrapper. Video mode helpers and more...
Google Maps API for .NET: GoogleMapsForNET v0.9 alpha release: First version, contains Core library featuring: Geocoding API, Elevation API, Static Maps API
Google Maps API for .NET: GoogleMapsForNET v0.9.1 alpha release: Fixed dependency issues; added NUnit binaries and updated Newtonsoft Json library.
Google Maps API for .NET: GoogleMapsForNET v0.9.2a alpha release: Recommended update. Code clean-up; did refactoring and major interface changes in Static Maps because it wasn't aligned to the 'simplest and least r...
Home Access Plus+: v3.2.0.0: v3.2.0.0 Release Change Log: More AJAX to reduce page refreshes (Deleting, New Folder, Rename moved from browser popups). Only 3 browser popups (1...
Html to OpenXml: HtmlToOpenXml 1.1: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5; dependencies on System.Drawing.dll and Docu...
Latent Semantic Analysis: Latest sources: Just the latest sources. Just click the changeset. Please note that in order to compile this code you need to download some additional code. You ...
Load Test User Mock Toolkits: Load Test User Mock Toolkits Help Doc: Samples and the framework introduction. Includes the framework introduction and typical examples.
Load Test User Mock Toolkits: Open.LoadTest.User.Mock.Toolkits 1.0 alpha: This is a pre-release version; it has not been optimized for performance, and the framework is still being refactored.
Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.4: This edition supports: Newer and older editions of Birdstep Technology's EasyConnect, HUAWEI Mobile Partner, MWConn, user-defined location for s...
Nito.KitchenSink: Version 3: Added Encoding.GetString(Stream, bool) for converting an entire stream into a string. Changed Stream.CopyTo to allow the stream to be closed/abor...
Numina Application Framework: Numina.Framework Core 49088: Fixed bug with headers introduced in rev. 48249 with change to HttpUtil class. admin/User_Pending.aspx page users weren't able to be deleted. Do...
OAuthLib: OAuthLib (1.6.4.0): Fix for 6390: Make it possible to configure the time-out value.
Quack Quack Says the Duck: Quack Quack Says The Duck 1.1.0.0: This new release pushes some work onto a background thread, clearing issues with multiple screen clicks while the UI was blocking.
Rapidshare Episode Downloader: RED v0.8.4: Added Edit feature. Moved season & episode int-to-string conversion into a separate function. Fixed some more minor issues. Added 'Previous' feature. F...
RoTwee: RoTwee (8.1.3.0): Update OAuthLib to 1.6.4.0
SharePoint Company Directory / Active Directory Self Service System: SharePoint Company Directory with AD Import: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint.
Then this info ca...
Simply Classified: v1.00.12: Comsite Simply Classified v1.00.12 - STABLE - Tested against DotNetNuke v4.9.5 and v5.2.x. Bug Fixes/Enhancements: BUGFIX: Resolved issues with 1...
sPATCH: sPatch v0.9: Completely recoded with wxWidgets. The following content is different to .NET Patcher: no requirement for .NET Framework; manual patch was removed to av...
SSAS Profiler Trace Scheduler: SSAS Profiler Trace Scheduler: AS Profiler Scheduler is a tool that will enable scheduling of SQL AS tracing using predefined Profiler templates. For tracking different issues th...
sysFix: sysfix build v5: A stable beta release, please refer to home page for further details.
VOB2MKV: vob2mkv-1.0.4: This is a feature update of the VOB2MKV utility. The command-line parsing in the VOB2MKV application has been greatly improved. You can now get f...
xnaWebcam: xnaWebcam 0.1: Program Version 0.1: Show webcam device; Draw.String WebcamTexture.VideoDevice.Name.ToString(). Instructions: 1. Plug in your webca...
xnaWebcam: xnaWebcam 0.2: Version 0.2: setResolution; Keys.Escape: this.Exit() << exit the game/application. --- Version 0.1: Show webcam device; Draw.Strin...
xnaWebcam: xnaWebcam 0.21: Version 0.21: Fix: don't quit game/application after closing mainGameWindow; Fix: text position; Window.X, Window.Y --- Version 0.2...
Xploit Game Engine: Xploit_1_1 Release: Added Features: Multiple Mesh instancing.
Xploit Game Engine: Xploit_1_1 Source Code: Updates: Create multiple instances of the same Mesh using XModelMesh and XSkinnedMesh.
Yakiimo3D: DX11 DirectCompute Buddhabrot Source and Binary: DX11 DirectCompute Buddhabrot/Nebulabrot source and binary.

Most Popular Projects
Rawr
WBFS Manager
ASP.NET Ajax Library
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
AJAX Control Toolkit
Windows Presentation Foundation (WPF)
LiveUpload to Facebook
ASP.NET
Microsoft SQL Server Community & Samples

Most Active Projects
Rawr
jQuery Library for SharePoint Web Services
BlogEngine.NET
LINQ to Twitter
Managed Extensibility Framework
Microsoft Biology Foundation
Farseer Physics Engine
N2 CMS
NB_Store - Free DotNetNuke Ecommerce Catalog Module
patterns & practices – Enterprise Library


  • Auto-Suggest via ‘Trie’ (Pre-fix Tree)

    - by Strenium
    Auto-Suggest (Auto-Complete) has been around for a few years. Here’s my little snippet on the subject. For one of my projects, I had to deal with a non-trivial set of items to be pulled via auto-suggest, used by multiple concurrent users. Simple, dumb iteration through a list in local cache or back-end access didn’t quite cut it. Enter a nifty little structure, perfectly suited for storing and matching verbal data: the “Trie” (http://tinyurl.com/db56g), also known as a Pre-fix Tree:

    “Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest.”

    This is a very scalable, performant structure. Though, as usual, something ‘fast’ comes at a cost of ‘size’; fortunately RAM is more plentiful today, so I can live with that. I won’t bore you with the detailed algorithmic performance here - Google can do a better job of such. So, here’s a C# implementation of all this. Let’s start with the individual node:

Trie Node

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

/// <summary>
/// Contains datum of a single trie node.
/// </summary>
public class AutoSuggestTrieNode
{
    public char Value { get; set; }

    /// <summary>
    /// Gets a value indicating whether this instance is a leaf node.
    /// </summary>
    /// <value>
    ///     <c>true</c> if this instance is a leaf node; otherwise, a prefix node <c>false</c>.
    /// </value>
    public bool IsLeafNode { get; private set; }

    public List<AutoSuggestTrieNode> DescendantNodes { get; private set; }

    /// <summary>
    /// Initializes a new instance of the <see cref="AutoSuggestTrieNode"/> class.
    /// </summary>
    /// <param name="value">The phonetic value.</param>
    /// <param name="isLeafNode">if set to <c>true</c> [is leaf node].</param>
    public AutoSuggestTrieNode(char value = ' ', bool isLeafNode = false)
    {
        Value = value;
        IsLeafNode = isLeafNode;

        DescendantNodes = new List<AutoSuggestTrieNode>();
    }

    /// <summary>
    /// Gets the descendant of the pre-fix node with the given value, if any.
    /// </summary>
    /// <param name="descendantValue">The descendant value.</param>
    /// <returns></returns>
    public AutoSuggestTrieNode GetDescendant(char descendantValue)
    {
        return DescendantNodes.FirstOrDefault(descendant => descendant.Value == descendantValue);
    }
}

    Quite self-explanatory, imho. A node is either a “Pre-fix” or a “Leaf” node. A “Leaf” contains the full “word”, while the “Pre-fix” nodes act as indices used for matching the results.

    Ok, now the Trie:

Trie Structure

/// <summary>
/// Contains structure and functionality of an AutoSuggest Trie (Pre-fix Tree)
/// </summary>
public class AutoSuggestTrie
{
    private readonly AutoSuggestTrieNode _root = new AutoSuggestTrieNode();

    /// <summary>
    /// Adds the word to the trie by breaking it up into pre-fix nodes + a leaf node.
    /// </summary>
    /// <param name="word">Phonetic value.</param>
    public void AddWord(string word)
    {
        var currentNode = _root;
        word = word.Trim().ToLower();

        for (int i = 0; i < word.Length; i++)
        {
            var child = currentNode.GetDescendant(word[i]);

            if (child == null) /* this character hasn't yet been indexed in the trie */
            {
                var newNode = new AutoSuggestTrieNode(word[i], word.Length - 1 == i);

                currentNode.DescendantNodes.Add(newNode);
                currentNode = newNode;
            }
            else
                currentNode = child; /* this character is already indexed, move down the trie */
        }
    }

    /// <summary>
    /// Gets the suggested matches.
    /// </summary>
    /// <param name="word">The phonetic search value.</param>
    /// <returns></returns>
    public List<string> GetSuggestedMatches(string word)
    {
        var currentNode = _root;
        word = word.Trim().ToLower();

        var indexedNodesValues = new StringBuilder();
        var resultBag = new ConcurrentBag<string>();

        for (int i = 0; i < word.Length; i++) /* walk the trie along the full search prefix, as far as it is indexed */
        {
            var child = currentNode.GetDescendant(word[i]);

            if (child == null)
                break; /* done looking, the rest of the characters aren't indexed in the trie */

            indexedNodesValues.Append(word[i]);
            currentNode = child;
        }

        Action<AutoSuggestTrieNode, string> collectAllMatches = null;
        collectAllMatches = (node, aggregatedValue) => /* traverse the trie collecting matching leaf nodes (i.e. "full words") */
            {
                if (node.IsLeafNode) /* full word */
                    resultBag.Add(aggregatedValue); /* thread-safe write */

                Parallel.ForEach(node.DescendantNodes, descendantNode => /* parallel recursive traversal */
                {
                    collectAllMatches(descendantNode, String.Format("{0}{1}", aggregatedValue, descendantNode.Value));
                });
            };

        collectAllMatches(currentNode, indexedNodesValues.ToString());

        return resultBag.OrderBy(o => o).ToList();
    }

    /// <summary>
    /// Gets the total words (leafs) in the trie. Recursive traversal.
    /// </summary>
    public int TotalWords
    {
        get
        {
            int runningCount = 0;

            Action<AutoSuggestTrieNode> traverseAllDescendants = null;
            traverseAllDescendants = n => { runningCount += n.DescendantNodes.Count(o => o.IsLeafNode); n.DescendantNodes.ForEach(traverseAllDescendants); };
            traverseAllDescendants(this._root);

            return runningCount;
        }
    }
}

    Matching operations and inserts involve traversing the nodes until the right “spot” is found. Inserts need to be synchronous since ordering of data matters here. However, matching can be done as a parallel traversal using recursion (the collectAllMatches delegate above). Here’s sample usage:

[TestMethod]
public void AutoSuggestTest()
{
    var autoSuggestCache = new AutoSuggestTrie();

    var testInput = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero.
                Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris.
                Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad
                litora torquent per conubia nostra, per inceptos himenaeos. Curabitur sodales ligula in libero. Sed dignissim lacinia nunc.
                Curabitur tortor. Pellentesque nibh. Aenean quam. In scelerisque sem at dolor. Maecenas mattis. Sed convallis tristique sem.
                Proin ut ligula vel nunc egestas porttitor. Morbi lectus risus, iaculis vel, suscipit quis, luctus non, massa. Fusce ac
                turpis quis ligula lacinia aliquet. Mauris ipsum. Nulla metus metus, ullamcorper vel, tincidunt sed, euismod in, nibh. Quisque
                volutpat condimentum velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nam
                nec ante. Sed lacinia, urna non tincidunt mattis, tortor neque adipiscing diam, a cursus ipsum ante quis turpis. Nulla
                facilisi. Ut fringilla. Suspendisse potenti. Nunc feugiat mi a tellus consequat imperdiet. Vestibulum sapien. Proin quam. Etiam
                ultrices. Suspendisse in justo eu magna luctus suscipit. Sed lectus. Integer euismod lacus luctus magna. Quisque cursus, metus
                vitae pharetra auctor, sem massa mattis sem, at interdum magna augue eget diam. Vestibulum ante ipsum primis in faucibus orci
                luctus et ultrices posuere cubilia Curae; Morbi lacinia molestie dui. Praesent blandit dolor. Sed non quam. In vel mi sit amet
                augue congue elementum. Morbi in ipsum sit amet pede facilisis laoreet. Donec lacus nunc, viverra nec.";

    testInput.Split(' ').ToList().ForEach(word => autoSuggestCache.AddWord(word));

    var testMatches = autoSuggestCache.GetSuggestedMatches("le");
}

    ... and the result: the sorted list of words from the sample text that start with “le”. That’s it!


  • CodePlex Daily Summary for Sunday, February 20, 2011

CodePlex Daily Summary for Sunday, February 20, 2011

Popular Releases
View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial release
Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above); supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File; explorer supports operations running on multiple remote applications at the same time; explorer detects application disconnect (1-2 second delay); explorer confirms operation completed status; explorer d...
Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under lock screen. Fixed a bug when you open a pdf/mobi file without starting Adobe Reader/Amazon Kindle first. Boosted loading time for folders. Added \Windows directory (all devices). You can now interact with the filesystem while it is loading!
Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The 2nd beta release of the Map Editor; we have fixed a big bug in the file regeneration.
Document.Editor: 2011.6: What's new for Document.Editor 2011.6: new Left to Right and Right to Left support; new Indent more/less support; improved Home tab; improved tooltips/shortcut keys; minor bug fixes, improvements and speed-ups
Catel - WPF and Silverlight MVVM library: 1.2: Catel history: (+) Added (*) Changed (-) Removed (x) Error / bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com. Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...
??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...
Image.Viewer: 2011: First version of 2011
Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit Overview: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...
VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time you shut down Visual Studio. (#7940) Build 29 (beta). New: Added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item.
Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older #7338 (thanks psifive). Fix: Add-in does not work when "TortoiseSVN/bin" is not added to the PATH environment variable #7357. Fix: Missing error message when ...
Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: Sense/Net 6.0.3 Community Edition. We are happy to introduce you to the latest version of Sense/Net with integrated ECM workflow capabilities! In the past weeks we have been working hard to migrate the product to .NET 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...
thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: Added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...
Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a Portable Environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: Re-worked the ToolStrip settings, just to avoid the VS.NET clash with auto-generating files for .settings files; renamed it to .settings.config. Packaged both log4net and ToolStripSettings files into the installer. Upgraded the version inform...
SQL Server Process Info Log: Release: release
AllNewsManager.NET: AllNewsManager.NET 1.3: AllNewsManager.NET 1.3. This new version provides several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly URLs). New admin buttons. And more...
Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to Beta stage; publish photo feature; "email" field of User object added; new Graph API objects: Group, Event; new Graph API connections: likes, groups, events
DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html. What is new? The Grid extension added. The ModelBinder added, which helps you create bindable data Actions. The DnaFor() control factory added that enables Model-bindable extensions. Enhanced ListBox and ComboBox data binding.
Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features. Many bugfixes.
Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: v2.4.11046.2045 Fixes and/or improvements. Major: Added complete support for VC projects including .vcxproj & .vcproj. All padding issues fixed. A project's assembly versions are only changed if the project has been modified. Minor: Order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision. Fixed issue with global variable storage with some projects.
Fixed issue where if a project item's file does not exist, a ...
Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.

New Projects
.NET Core Audio APIs: This project provides a lightweight set of .NET wrappers around each section of the Windows Core Audio APIs.
Akwen: Akwen aspirant
bigserve: made in loads of languages, a server alternative to IIS and Apache
Cathedral game framework: (Will eventually be) A framework for creating 2D region-based games in Silverlight.
Gmail Like File Uploader In .NET: Many times I have wondered how Gmail uploads files and monitors the upload status. I suppose many of you have gone through the same experience. Most of you know that the Choose File button on the Gmail page is actually a Flash button, but how about the progress bar? AJAX, right
inicomInterface: ????ini???????COM??
iOnlineExamCenter - Trung Tâm Thi Trực Tuyến: Research multiple-choice examination methods and apply them to building an online exam center. The application is built on: .NET Framework v4.0, ASP.NET MVC v3, Entity Framework v4, Silverlight v4, jQuery
Lee Utility Library: Lee Utility Library
SocialCore: Social Core is a framework for developing social applications using a distributed infrastructure. The focus of Social Core is on collaboration, security and transparency. Privacy is not dead, it just is not implemented correctly.
TestAmazonCloud: tac
Tftp.Net: This project implements the TFTP (Trivial File Transfer) protocol for .NET in an easy-to-use library. It allows you to integrate TFTP client and server functionality into your project. It is stable, unit-tested and comes with a sample TFTP client and server.
Using the Repository Pattern and DI to switch between SQL and XML: This is an example application, showing how Dependency Injection and the Repository pattern can be used to allow interchangeable persistence between XML and SQL
WebForms Razor converter - WinForms: Telerik developed a command-line "WebForms to Razor (CSHTML) conversion tool" and uploaded it to git - https://github.com/telerik/razor-converter. This project uses the same Telerik API, but the client is a GUI/WinForms application that allows developers to convert multiple files in one go.
XNA and Dependency Injection: Test code sample for XNA and Dependency Injection.
XNA and IoC Container: Test code sample for XNA and IoC Container.
XNA and Unit Testing: Test code sample for XNA and Unit Testing.
Xplore World: Xplore World is an advanced web browser available for Windows XP/Vista/7, currently only available in Spanish. With this browser you can do wonderful things such as viewing images from any website your way, managing users, editing your bookmarks, etc.
ZO (ZeroOne): ZeroOne is the editor for programmers who think in binary.

