Search Results

Search found 23044 results on 922 pages for 'oracle solaris 11'.

  • Asterisk is hacking itself [duplicate]

    - by Shirker
    This question already has an answer here: How do I deal with a compromised server? (11 answers). I've got some strange logs on my Asterisk box (and a lot of extensions were tried): chan_sip.c: Failed to authenticate device 6006<sip:[email protected]>;tag=f106f3fe, but IP XX.XX.XX.39 is its OWN IP! cat /etc/asterisk/* | grep 6006 returns nothing, and asterisk -rv reports Asterisk 11.4.0. How is it possible that it hacks itself? And how could I trace where it comes from?

    Read the article

  • The project estimates the installation of external and internal surveillance. [closed]

    - by Zhasulan Berdybekov
    The project estimates the installation of external and internal surveillance. Here are our objects, listed as number of cameras, the object, and the distance to the Situation Centre: 11 - New Alphabet - 1.5 km; 11 - New Alphabet - 1 km; 19 - New Alphabet - 800 m; 19 - New Alphabet - 1 km; 35 - The building - 200 m; 35 - The building - 100 m; 18 - The building - 100 m; 22 - outside video surveillance - 50 m to 1 km. Please tell us how many DVRs are needed and where to place them (at the object or at the Situation Centre), how to bring the information to the Situation Centre, and what cables are needed. Your advice and comments are welcome. Thank you for your efforts!

    Read the article

  • internet without dns tennis play [closed]

    - by Curious
    Why do we make DNS requests separately when the ISP could handle the DNS lookup along with the HTTP data simultaneously? So rather than: ask OpenDNS what Yahoo's address is; OpenDNS returns 66.55.44.11; hey Verizon, send/request data from 66.55.44.11. Why wouldn't the protocol just request data from "yahoo.com" and let Verizon interpret yahoo.com as a split DNS request? This would surely lower latency, since it cuts out the round trip where the DNS server has to send back the IP, which is then sent AGAIN, when the ISP could theoretically handle the entire request. Couldn't this be managed via a hosts file change on the client side plus compatible servers? So, much like a proxy.

    Read the article

  • Is it possible to use the same MAC address for an entire subnet?

    - by Bruce
    I wish to add static entries to the ARP table of my machine so that it uses a dummy MAC 00:11:22:33:44:55 for any IP address within the subnet 10.0.0.0/8. Using arp -s 10.0.0.0/8 00:11:22:33:44:55 does not work. What can I do? PS: I know it might sound strange that anyone would want to do this, but kindly bear with me here. EDIT: I am using this so that the hosts do not send a broadcast ARP message. I route the packets to the appropriate last-hop router, which changes the dst MAC from the fake MAC to the MAC of the dst IP address. I can get everything working, except that I have to manually enter the fake MAC for each subnet IP address.
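
    One way people typically avoid per-IP entries on Linux is to pin a single static ARP entry for a next-hop address and route the whole subnet through it. A minimal sketch, assuming a Linux host with iproute2, an interface named eth0, and 10.0.0.1 as an already-reachable next-hop (the interface name and next-hop are illustrative assumptions, not part of the question):

        # Pin one static ARP entry for the chosen next-hop:
        arp -s 10.0.0.1 00:11:22:33:44:55

        # Route the whole subnet via that next-hop, so frames for 10.0.0.0/8
        # are sent to the fake MAC without one ARP entry per destination IP:
        ip route add 10.0.0.0/8 via 10.0.0.1 dev eth0

        # Fallback: scripting per-IP entries works for small ranges only
        # (enumerating a /8 this way is impractical):
        for i in $(seq 1 254); do arp -s 10.0.0.$i 00:11:22:33:44:55; done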

    Read the article

  • count the number of times a substring is found within a date range in excel

    - by ckr
    I have a spreadsheet that contains test data. Column A has the test name and column B contains the test date. I want to count the number of times the string "Rerun" is found within a certain date range. For example: test1, 11/2/2012; test2, 11/7/2012; test1_Rerun_1, 11/10/2012; test2_Rerun_1, 11/16/2012. I am doing a weekly report, so I want to show how many tests had to be rerun in a particular week. So in the above example: week ending 11/2/12 would return 0 (look for dates >10/26/12 and <=11/2/12 with substring "Rerun"); week ending 11/9/12 would return 0 (look for dates >11/2/12 and <=11/9/12 with substring "Rerun"); week ending 11/16/12 would return 2 (look for dates >11/9/12 and <=11/16/12 with substring "Rerun").
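
    If it helps, this maps naturally onto COUNTIFS with a wildcard on the name column and a date window on the date column. A sketch only, assuming the data lives in A2:B100 and E1 holds the week-ending date (adjust the ranges to your sheet):

        =COUNTIFS(A2:A100, "*Rerun*", B2:B100, ">"&(E1-7), B2:B100, "<="&E1)

    Dragging this across a row of week-ending dates in E1, F1, ... would then give the weekly rerun counts.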

    Read the article

  • Crunchbang asks for URIs to get build dependencies for php5 [closed]

    - by Koen027
    I'm trying to install PHP5, by compiling it myself. I'm doing this on Crunchbang Linux, version 11. Specifically, the version using the 3.2 kernel. Crunchbang 11 is based on Debian. This is a 64-bit Virtual machine, running on a 64-bit Win7 Professional. Using Virtualbox 4.2 This is my sources.list file: http://pastebin.com/rQJNwX7c When I try to do sudo apt-get build-dep php5, I get the following output: Reading package lists... Done Building dependency tree Reading state information... Done E: You must put some 'source' URIs in your sources.list I'm not sure what's going on here. Can anyone point out what's missing in my setup? Also, since I'm under 300 reputation on serverfault, I can't add the "crunchbang" tag.
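
    For what it's worth, apt-get build-dep reads only deb-src lines, so the usual fix is to mirror each deb line in sources.list with a matching deb-src line and refresh. A sketch against the stock Debian Wheezy archive that Crunchbang 11 is based on (your actual mirror URLs may differ from these):

        # /etc/apt/sources.list -- add source entries alongside the binary ones
        deb     http://ftp.debian.org/debian wheezy main contrib non-free
        deb-src http://ftp.debian.org/debian wheezy main contrib non-free

        # then refresh the package lists and retry
        sudo apt-get update
        sudo apt-get build-dep php5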

    Read the article

  • Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe)

    - by Millisami
    Hi, Upgraded to Rails 2.3.2 and Passenger 2.2.4 on Ubuntu hardy slice at slicehost with Apache2 I'm getting this same above discussed error in my Apache error.log of system /var/logs/apache2/ [ pid=4249 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:47:32.752 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. *** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4391): from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/rack/request_handler.rb:93:in `write' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/rack/request_handler.rb:93:in `process_request' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_request_handler.rb:206:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:376:in `start_request_handler' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:334:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/utils.rb:182:in `safe_fork' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:332:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `__send__' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:195:in `start_synchronously' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:162:in `start' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:213:in `start' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:261:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:126:in `lookup_or_add' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:255:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:80:in `synchronize' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:79:in `synchronize' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:254:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:153:in `spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:286:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `__send__' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:195:in `start_synchronously' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/bin/passenger-spawn- server:61 *** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4383): and these too. 
pid=4362 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.251 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. [ pid=4298 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.255 ]: No data received from the backend application (process 4252) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. [Sat Jul 04 11:55:19 2009] [error] [client 86.96.226.13] Premature end of script headers: 41, referer: http://domain.com/ [ pid=4373 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.559 ]: Its getting me mad and on the browser, sometimes its show and when refreshed, Application Error 500 shows up in frequent basis. any directions??
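
    As the log itself suggests, one knob to try while hunting the frozen backend is Apache's TimeOut directive, plus a little more headroom for Passenger. A hedged sketch for the Apache/Passenger configuration; the directives are standard, but the numbers are purely illustrative, not a recommendation:

        # apache2.conf or the vhost for the Rails app
        TimeOut 300                # the 45 s the error message refers to
        PassengerMaxPoolSize 6     # cap concurrent Rails processes on a small slice
        PassengerPoolIdleTime 300  # keep spawned processes alive a bit longer

    If requests still exceed the timeout, the real fix is finding out why the Rails processes freeze (database locks, swapping on the slice, etc.) rather than raising the limit further.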

    Read the article

  • Populate google.visualization.DataTable for a AnnotatedTimeLine using JSON

    - by Lucifer
    Hi I have a HttpHandler which returns some JSON in (i think) the correct format for a google.visualization.DataTable, but the AnnotatedTimeLine fails to work? This is the JSON returned by the Handler: {cols: [{id: 'DATE', label: 'Date', type: 'date'}, {id: 'KEYWORD51', label: 'vw cheltenham', type: 'number'}, {id: 'KEYWORD52', label: 'volkswagen cheltenham', type: 'number'}, {id: 'KEYWORD61', label: 'vw dealer cheltenham', type: 'number'}], rows: [{c: [{v: new Date(2010, 3, 13)}, {v: 20}, {v: 1}, {v: 2}]}, {c: [{v: new Date(2010, 3, 14)}, {v: 19}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 15)}, {v: 19}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 16)}, {v: 18}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 17)}, {v: 17}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 18)}, {v: 17}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 19)}, {v: 12}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 20)}, {v: 13}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 21)}, {v: 11}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 22)}, {v: 10}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 23)}, {v: 10}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 24)}, {v: 8}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 25)}, {v: 6}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 26)}, {v: 6}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 27)}, {v: 5}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 28)}, {v: 4}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 29)}, {v: 4}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 30)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 1)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 2)}, {v: 1}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 3)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 4)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 5)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 6)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 7)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 8)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 9)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 10)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 11)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 12)}, {v: 2}, {v: 1}, {v: 1}]}]} This is the Javascript, I used JQuery to get the JSON, have also tried $.getJSON() google.load('visualization', '1', { 'packages': ['annotatedtimeline'] }); google.setOnLoadCallback(loadGraph); function loadGraph() { $.get("/GraphDataHandler.axd", function(response) { drawGraph(response); }); } function drawGraph(response) { var visualization = new google.visualization.AnnotatedTimeLine(document.getElementById('chart_div')); var data = new google.visualization.DataTable(response, 0.6); visualization.draw(data, { title: 'Rankings', titleX: 'Date', titleY: 'Position', displayAnnotations: false, allowRedraw: true }); } But, if I write the same JSON to the page like below it works fine!? 
<script type="text/javascript"> //<![CDATA[ var gData = {cols: [{id: 'DATE', label: 'Date', type: 'date'}, {id: 'KEYWORD51', label: 'vw cheltenham', type: 'number'}], rows: [{c: [{v: new Date(2010, 3, 13)}, {v: 20}]}, {c: [{v: new Date(2010, 3, 14)}, {v: 19}]}, {c: [{v: new Date(2010, 3, 15)}, {v: 19}]}, {c: [{v: new Date(2010, 3, 16)}, {v: 18}]}, {c: [{v: new Date(2010, 3, 17)}, {v: 17}]}, {c: [{v: new Date(2010, 3, 18)}, {v: 17}]}, {c: [{v: new Date(2010, 3, 19)}, {v: 12}]}, {c: [{v: new Date(2010, 3, 20)}, {v: 13}]}, {c: [{v: new Date(2010, 3, 21)}, {v: 11}]}, {c: [{v: new Date(2010, 3, 22)}, {v: 10}]}, {c: [{v: new Date(2010, 3, 23)}, {v: 10}]}, {c: [{v: new Date(2010, 3, 24)}, {v: 8}]}, {c: [{v: new Date(2010, 3, 25)}, {v: 6}]}, {c: [{v: new Date(2010, 3, 26)}, {v: 6}]}, {c: [{v: new Date(2010, 3, 27)}, {v: 5}]}, {c: [{v: new Date(2010, 3, 28)}, {v: 4}]}, {c: [{v: new Date(2010, 3, 29)}, {v: 4}]}, {c: [{v: new Date(2010, 3, 30)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 1)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 2)}, {v: 1}]}, {c: [{v: new Date(2010, 4, 3)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 4)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 5)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 6)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 7)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 8)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 9)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 10)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 11)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 12)}, {v: 2}]}]}; //]]> </script> Please advise how I can get it to work correctly using a the JSON calls? Thanks
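
    A likely explanation, for what it's worth: the handler's output contains new Date(...), so it is a JavaScript object literal rather than strict JSON; jQuery therefore hands the callback a plain string, while the inline <script> version works because the literal is evaluated directly as code. A hedged sketch of the workaround, evaluating the trusted, same-origin response text before building the DataTable:

        function loadGraph() {
            $.get("/GraphDataHandler.axd", function (response) {
                // the response is a JS literal, not strict JSON, so turn the text into an object
                var tableJson = (typeof response === "string") ? eval("(" + response + ")") : response;
                drawGraph(tableJson);
            }, "text");
        }

    The cleaner long-term option is to have the handler emit real JSON with the dates written as strings of the form "Date(2010, 3, 13)", which the DataTable constructor understands, and fetch it with $.getJSON.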

    Read the article

  • Having trouble getting cucumber 0.6.3 to run on rails 2.3.4

    - by Yak
    Hi, I am trying to to get cucumber to run with no luck. Here is the error I am seeing: cucumber features Using the default profile... no such file to load -- test/ (MissingSourceFile) /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Users/yakovrabinovich/Starstreet/starstreet/vendor/gems/cucumber-0.6.3/bin/../lib/cucumber/rails/world.rb:11 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Library/Ruby/Gems/1.8/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:1 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Users/yakovrabinovich/Starstreet/starstreet/features/support/env.rb:11 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in each' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in execute!' 
/Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in execute' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8 /usr/bin/cucumber:19:in `load' /usr/bin/cucumber:19 Here are my gems: Yakov-Rabinovichs-MacBook:1.8 yakovrabinovich$ gem list * LOCAL GEMS * aasm (2.1.3) acl9 (0.11.0) actionmailer (2.3.4, 2.2.2, 1.3.6) actionpack (2.3.4, 2.2.2, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.4, 2.2.2, 1.15.6) activeresource (2.3.4, 2.2.2) activesupport (2.3.4, 2.2.2, 1.4.4) acts_as_ferret (0.4.3) authlogic (2.1.3) bgetting-hominid (1.2.0) builder (2.1.2) capistrano (2.5.2) capistrano-ext (1.2.1) cgi_multipart_eof_fix (2.5.0) chronic (0.2.3) columnize (0.3.1) configatron (2.5.1) cucumber (0.6.3) cucumber-rails (0.3.0) daemons (1.0.10) database_cleaner (0.5.0) diff-lcs (1.1.2) dnssd (0.6.0) factory_girl (1.2.3) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.6) gem_plugin (0.2.3) gemcutter (0.4.1) highline (1.5.0) hoe (2.5.0) hominid (2.1.0) hpricot (0.6.164) json (1.2.0) json_pure (1.2.0) libxml-ruby (1.1.2) linecache (0.43) mocha (0.9.8) mongrel (1.1.5) needle (1.3.0) net-scp (1.0.1) net-sftp (2.0.1, 1.1.1) net-ssh (2.0.16, 2.0.4, 1.1.4) net-ssh-gateway (1.0.0) nokogiri (1.4.1) oauth (0.3.6) pg (0.8.0) polyglot (0.3.0) rack (1.0.1) rack-test (0.5.3) rails (2.3.4, 2.2.2, 1.2.6) rake (0.8.7, 0.8.3) RedCloth (4.1.1) rspec (1.3.0) rspec-rails (1.3.2) ruby-debug (0.10.3) ruby-debug-base (0.10.3) ruby-hmac (0.4.0) ruby-openid (2.1.2) ruby-yadis (0.3.4) rubyforge (2.0.3) rubygems-update (1.3.5) rubynode (0.1.5) sqlite3-ruby (1.2.4) term-ansicolor (1.0.4) termios (0.9.4) test-unit (1.2.3) thoughtbot-factory_girl (1.2.2) thoughtbot-shoulda (2.10.2) treetop (1.4.4) whenever (0.4.1) will_paginate (2.3.11) xmpp4r (0.4) yamler (0.1.0) Any help would be greatly appreciated!

    Read the article

  • tcpdf table header on each page

    - by DragoN
    i am using tcpdf to create pdf files with rows of table in the first page of pdf i display some info like : List of table .... and header of table after that i display the rows in table if there much rows it contiunes in the next page without the info [List of table... and header of table] i want to display the header of table only in the next pages here is my code $pdf->SetFont('aefurat', '', 15); $pdf->AddPage('P', 'A4'); $pdf->SetFontSize(17); $pdf->Cell(0, 13, 'List of the Byan Table,'C'); $pdf->SetFont('dejavusans', '', 14); $htmlpersian = 'In / Out List'; $pdf->WriteHTML($htmlpersian, true, 0, true, 0); $pdf->setRTL(false); $pdf->SetFontSize(11); $pdf->setRTL(true); Connect(); $resultsc = mysql_query("SELECT * FROM byan Order By Date "); while($r = mysql_fetch_array($resultsc)) { $printresult .= ' <tr> <td ><center>'.$r['Date'].'</center></td> <td ><center>'.$r['In'].'</center></td> <td ><center>'.$r['Out'].'</center></td> <td ><center>'.$r['Balance'].'</center></td> <td ><center>'.$r['Info'].'</center></td> <td ><center>'.$r['Number'].'</center></td> </tr>'; } $tbl = ' <table cellspacing="0" cellpadding="1" border="1" align="center"> <tr> <td style="width: 13%; background-color:black; color:white;"><center>Date</center></td> <td style="width: 11%; background-color:black; color:white;"><center>In</center></td> <td style="width: 11%; background-color:black; color:white;"><center>Out</center></td> <td style="width: 12%; background-color:black; color:white;"><center>Balance</center></td> <td style="width: 45%; background-color:black; color:white;"><center>Info</center></td> <td style="width: 11%; background-color:black; color:white;"><center>Number</center></td> </tr> '.$printresult.' </table> '; $pdf->writeHTML($tbl, true, false, true, false, ''); the output is First Page: List of the Byan Table In / Out List Table Header rows Second Page: Rows Third Page: Rows i want the output to be like that First Page: List of the Byan Table In / Out List Table Header Rows Second Page: Table Header Rows Third Page: Table Header Rows
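
    One approach that may do what you want: TCPDF's writeHTML() repeats rows wrapped in <thead> after every automatic page break, so the header row can live in a <thead> section instead of being printed once. A sketch based on the table above, assuming a TCPDF version recent enough to support <thead>:

        $tbl = '
        <table cellspacing="0" cellpadding="1" border="1" align="center">
        <thead>
        <tr>
          <td style="width: 13%; background-color:black; color:white;"><center>Date</center></td>
          <td style="width: 11%; background-color:black; color:white;"><center>In</center></td>
          <td style="width: 11%; background-color:black; color:white;"><center>Out</center></td>
          <td style="width: 12%; background-color:black; color:white;"><center>Balance</center></td>
          <td style="width: 45%; background-color:black; color:white;"><center>Info</center></td>
          <td style="width: 11%; background-color:black; color:white;"><center>Number</center></td>
        </tr>
        </thead>
        ' . $printresult . '
        </table>';
        // the header row above is now repeated at the top of every page
        $pdf->writeHTML($tbl, true, false, true, false, '');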

    Read the article

  • Objective-C: Scope problems cellForRowAtIndexPath

    - by Mr. McPepperNuts
    How would I set each individual row in cellForRowAtIndexPath to the results of an array populated by a Fetch Request? (Fetch Request made when button is pressed.) - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // ... set up cell code here ... cell.textLabel.text = [results objectAtIndex:indexPath valueForKey:@"name"]; } warning: 'NSArray' may not respond to '-objectAtIndexPath:' Edit: - (NSArray *)SearchDatabaseForText:(NSString *)passdTextToSearchFor{ NSManagedObject *searchObj; XYZAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name contains [cd] %@", passdTextToSearchFor]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [request setEntity: entity]; [request setPredicate: predicate]; NSError *error; results = [managedObjectContext executeFetchRequest:request error:&error]; // NSLog(@"results %@", results); if([results count] == 0){ NSLog(@"No results found"); searchObj = nil; self.tempString = @"No results found."; }else{ if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) { NSLog(@"results %@", [[results objectAtIndex:0] name]); searchObj = [results objectAtIndex:0]; }else{ NSLog(@"No results found"); self.tempString = @"No results found."; searchObj = nil; } } [tableView reloadData]; [request release]; [sortDescriptors release]; return results; } - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar{ textToSearchFor = mySearchBar.text; results = [self SearchDatabaseForText:textToSearchFor]; self.tempString = [myGlobalSearchObject valueForKey:@"name"]; NSLog(@"results count: %d", [results count]); NSLog(@"results 0: %@", [[results objectAtIndex:0] name]); NSLog(@"results 1: %@", [[results objectAtIndex:1] name]); } @end Console prints: 2010-06-10 16:11:18.581 XYZApp[10140:207] results count: 2 2010-06-10 16:11:18.581 XYZApp[10140:207] results 0: BB Bugs 2010-06-10 16:11:18.582 XYZApp[10140:207] results 1: BB Annie Program received signal: “EXC_BAD_ACCESS”. (gdb) Edit 2: BT: #0 0x95a91edb in objc_msgSend () #1 0x03b1fe20 in ?? 
() #2 0x0043cd2a in -[UITableViewRowData(UITableViewRowDataPrivate) _updateNumSections] () #3 0x0043ca9e in -[UITableViewRowData invalidateAllSections] () #4 0x002fc82f in -[UITableView(_UITableViewPrivate) _updateRowData] () #5 0x002f7313 in -[UITableView noteNumberOfRowsChanged] () #6 0x00301500 in -[UITableView reloadData] () #7 0x00008623 in -[SearchViewController SearchDatabaseForText:] (self=0x3d16190, _cmd=0xf02b, passdTextToSearchFor=0x3b29630) #8 0x000086ad in -[SearchViewController searchBarSearchButtonClicked:] (self=0x3d16190, _cmd=0x16492cc, searchBar=0x3d2dc50) #9 0x0047ac13 in -[UISearchBar(UISearchBarStatic) _searchFieldReturnPressed] () #10 0x0031094e in -[UIControl(Deprecated) sendAction:toTarget:forEvent:] () #11 0x00312f76 in -[UIControl(Internal) _sendActionsForEventMask:withEvent:] () #12 0x0032613b in -[UIFieldEditor webView:shouldInsertText:replacingDOMRange:givenAction:] () #13 0x01d5a72d in __invoking___ () #14 0x01d5a618 in -[NSInvocation invoke] () #15 0x0273fc0a in SendDelegateMessage () #16 0x033168bf in -[_WebSafeForwarder forwardInvocation:] () #17 0x01d7e6f4 in ___forwarding___ () #18 0x01d5a6c2 in __forwarding_prep_0___ () #19 0x03320fd4 in WebEditorClient::shouldInsertText () #20 0x0279dfed in WebCore::Editor::shouldInsertText () #21 0x027b67a5 in WebCore::Editor::insertParagraphSeparator () #22 0x0279d662 in WebCore::EventHandler::defaultTextInputEventHandler () #23 0x0276cee6 in WebCore::EventTargetNode::defaultEventHandler () #24 0x0276cb70 in WebCore::EventTargetNode::dispatchGenericEvent () #25 0x0276c611 in WebCore::EventTargetNode::dispatchEvent () #26 0x0279d327 in WebCore::EventHandler::handleTextInputEvent () #27 0x0279d229 in WebCore::Editor::insertText () #28 0x03320f4d in -[WebHTMLView(WebNSTextInputSupport) insertText:] () #29 0x0279d0b4 in -[WAKResponder tryToPerform:with:] () #30 0x03320a33 in -[WebView(WebViewEditingActions) _performResponderOperation:with:] () #31 0x03320990 in -[WebView(WebViewEditingActions) insertText:] () #32 0x00408231 in -[UIWebDocumentView insertText:] () #33 0x003ccd31 in -[UIKeyboardImpl acceptWord:firstDelete:addString:] () #34 0x003d2c8c in -[UIKeyboardImpl addInputString:fromVariantKey:] () #35 0x004d1a00 in -[UIKeyboardLayoutStar sendStringAction:forKey:] () #36 0x004d0285 in -[UIKeyboardLayoutStar handleHardwareKeyDownFromSimulator:] () #37 0x002b5bcb in -[UIApplication handleEvent:withNewEvent:] () #38 0x002b067f in -[UIApplication sendEvent:] () #39 0x002b7061 in _UIApplicationHandleEvent () #40 0x02542d59 in PurpleEventCallback () #41 0x01d55b80 in CFRunLoopRunSpecific () #42 0x01d54c48 in CFRunLoopRunInMode () #43 0x02541615 in GSEventRunModal () #44 0x025416da in GSEventRun () #45 0x002b7faf in UIApplicationMain () #46 0x00002578 in main (argc=1, argv=0xbfffef5c) at /Users/default/Documents/iPhone Projects/XYZApp/main.m:14
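
    Two hedged observations that may explain both the compiler warning and the crash: NSArray has no objectAtIndexPath: method (the row index is indexPath.row), and the array returned by executeFetchRequest: is autoreleased, so unless it is assigned to a retained property it can already be gone when the table view asks for cells. A sketch of the cell method under those assumptions (manual reference counting, with results declared as a retained property):

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CellId = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:CellId] autorelease];
            }
            // index into the fetch results by row and read the attribute
            NSManagedObject *entry = [self.results objectAtIndex:indexPath.row];
            cell.textLabel.text = [entry valueForKey:@"name"];
            return cell;
        }

    In SearchDatabaseForText:, assigning through the property (self.results = [managedObjectContext executeFetchRequest:request error:&error];) keeps the array alive across the reloadData call.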

    Read the article

  • How to use jQuery to assign a class to only one radio button in a group when user clicks on it?

    - by xraminx
    I have the following markup. I would like to add class_A to <p class="subitem-text"> (that holds the radio button and the label) when user clicks on the <input> or <label>. If user clicks some other radio-button/label in the same group, I would like to add class_A to this radio-button's parent paragraph and remove class_A from any other paragraph that hold radio-buttons/labels in that group. Effectively, in each <li>, only one <p class="subitem-text"> should have class_A added to it. Is there a jQuery plug-in that does this? Or is there a simple trick that can do this? <ul> <li> <div class="myitem-wrapper" id="10"> <div class="myitem clearfix"> <span class="number">1</span> <div class="item-text">Some text here </div> </div> <p class="subitem-text"> <input type="radio" name="10" value="15" id="99"> <label for="99">First subitem </label> </p> <p class="subitem-text"> <input type="radio" name="10" value="77" id="21"> <label for="21">Second subitem</label> </p> </div> </li> <li> <div class="myitem-wrapper" id="11"> <div class="myitem clearfix"> <span class="number">2</span> <div class="item-text">Some other text here ... </div> </div> <p class="subitem-text"> <input type="radio" name="11" value="32" id="201"> <label for="201">First subitem ... </label> </p> <p class="subitem-text"> <input type="radio" name="11" value="68" id="205"> <label for="205">Second subitem ...</label> </p> <p class="subitem-text"> <input type="radio" name="11" value="160" id="206"> <label for="206">Third subitem ...</label> </p> </div> </li>
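
    No plug-in should be needed; a few lines of jQuery bound to the radios' change event can move the class around within each group. A sketch, assuming class_A is defined in your stylesheet and that clicking a label toggles its radio via the for attribute:

        $('p.subitem-text input[type="radio"]').change(function () {
            var group = $(this).attr('name');
            // clear the class from every wrapper paragraph in this radio group...
            $('input[type="radio"][name="' + group + '"]').closest('p.subitem-text').removeClass('class_A');
            // ...then add it to the paragraph holding the radio that was just selected
            $(this).closest('p.subitem-text').addClass('class_A');
        });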

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks, I'm developing in LAMP (Linux+Apache+MySQL+PHP) since I remember myself. But one question was bugging me for years now. I hope you can help me to find an answer and point me into the right direction. Here is my challenge: Say, we are creating a community website, where we allow our users to register. The MySQL table where we store all users would look then like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; So, from this snippet you can see that we have a unique and automatically incrementing for every new user 'uid' field. As on every good and loyal community website we need to provide users with possibility to completely delete their profile if they want to cancel their participation in our community. Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2) and Chris (uid = 3). Now Bob want to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table then his missing 'uid' will create a gap which will be never filled again. In my opinion it's a huge waste of uid's. I see 3 possible solutions here: 1) Increase the capacity of the 'uid' field in our table from SMALLINT (int(2)) to, for example, BIGINT (int(8)) and ignore the fact that some of the uid's will be wasted. 2) introduce the new field 'is_deleted', which will be used to mark deleted profiles (but keep them in the table, instead of deleting them) to re-utilize their uid's for newly registered users. The table will look then like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations', `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; 3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case when Bob (uid = 2) decides to remove his profile, we would replace his record with the record of Chris (uid = 3), so that uid of Chris becomes qual to 2 and mark (is_deleted = '1') the old record of Chris as vacant for the new users. In this case we keep the chronological order of uid's according to the registration time, so that the older users have lower uid's. Please, advice me now which way is the right way to handle the gaps in the auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!

    Read the article

  • Module Adminhtml blocks not loading

    - by David Tay
    I was working on a Magento module and it was working fine. At some point, I was trying to enable WYSIWYG in an edit form 'content' field and suddenly, my adminhtml grid and edit blocks stopped being generated. On my system are TinyMCE and Fontis FCKEditor WYSIWYG editors extensions. I'm not sure what I did wrong but my adminhtml blocks will no longer generate. Here's a dump of all the blocks from my module's adminhtml layout: array(17) { [0]=> string(4) "root" [1]=> string(4) "head" [2]=> string(13) "head.calendar" [3]=> string(14) "global_notices" [4]=> string(6) "header" [5]=> string(4) "menu" [6]=> string(11) "breadcrumbs" [7]=> string(7) "formkey" [8]=> string(12) "js_translate" [9]=> string(4) "left" [10]=> string(7) "content" [11]=> string(8) "messages" [12]=> string(2) "js" [13]=> string(6) "footer" [14]=> string(8) "profiler" [15]=> string(15) "before_body_end" [16]=> string(7) "wysiwyg" } As you can see, the last item is "wysiwyg" but on the layout output of other magento modules, there are more blocks. For example, on MathieuF's calendar extension, these are all the layout blocks: array(26) { [0]=> string(4) "root" [1]=> string(4) "head" [2]=> string(13) "head.calendar" [3]=> string(14) "global_notices" [4]=> string(6) "header" [5]=> string(4) "menu" [6]=> string(11) "breadcrumbs" [7]=> string(7) "formkey" [8]=> string(12) "js_translate" [9]=> string(4) "left" [10]=> string(7) "content" [11]=> string(8) "messages" [12]=> string(2) "js" [13]=> string(6) "footer" [14]=> string(8) "profiler" [15]=> string(15) "before_body_end" [16]=> string(7) "wysiwyg" [17]=> string(27) "adminhtml_event.grid.child0" [18]=> string(12) "ANONYMOUS_19" [19]=> string(27) "adminhtml_event.grid.child1" [20]=> string(12) "ANONYMOUS_21" [21]=> string(27) "adminhtml_event.grid.child2" [22]=> string(20) "adminhtml_event.grid" [23]=> string(12) "ANONYMOUS_24" [24]=> string(19) "ANONYMOUS_17.child1" [25]=> string(14) "content.child0" } Does anyone have any idea what's wrong? I've already tried Alan Storm's Layout and Config Viewers and cannot find any clues as to what I did wrong. Any help would be greatly appreciated.

    Read the article

  • AsyncTask, inner classes and Buttons states

    - by Intern John Smith
    For a musical application (a sequencer), I have a few buttons static ArrayList<Button> Buttonlist = new ArrayList<Button>(); Buttonlist.add(0,(Button) findViewById(R.id.case_kick1)); Buttonlist.add(1,(Button) findViewById(R.id.case_kick2)); Buttonlist.add(2,(Button) findViewById(R.id.case_kick3)); Buttonlist.add(3,(Button) findViewById(R.id.case_kick4)); Buttonlist.add(4,(Button) findViewById(R.id.case_kick5)); Buttonlist.add(5,(Button) findViewById(R.id.case_kick6)); Buttonlist.add(6,(Button) findViewById(R.id.case_kick7)); Buttonlist.add(7,(Button) findViewById(R.id.case_kick8)); that I activate through onClickListeners with for example Buttonlist.get(0).setActivated(true); I played the activated sound via a loop and ifs : if (Buttonlist.get(k).isActivated()) {mSoundManager.playSound(1); Thread.sleep(250);} The app played fine but I couldn't access the play/pause button when it was playing : I searched and found out about AsyncTasks. I have a nested class PlayPause : class PlayPause extends AsyncTask<ArrayList<Button>,Integer,Void> in which I have this : protected Void doInBackground(ArrayList<Button>... params) { for(int k=0;k<8;k++) { boolean isPlayed = false; if (Buttonlist.get(k).isActivated()) { mSoundManager.playSound(1); isPlayed = true; try {Thread.sleep(250); } catch (InterruptedException e) {e.printStackTrace();}} if(!isPlayed){ try {Thread.sleep(250);} catch (InterruptedException e) {e.printStackTrace();} } } return null; } I launch it via Play.setOnClickListener(new PlayClickListener()); with PlayClickListener : public class PlayClickListener implements OnClickListener { private Tutorial acti; public PlayClickListener(){ super(); } @SuppressWarnings("unchecked") public void onClick(View v) { if(Tutorial.play==0){ Tutorial.play=1; Tutorial.Play.setBackgroundResource(R.drawable.action_down); acti = new Tutorial(); Tutorial.PlayPause myPlayPause = acti.new PlayPause(); myLecture.execute(Tutorial.Buttonlist); } else { Tutorial.play=0; Tutorial.lecture.cancel(true); Tutorial.Play.setBackgroundResource(R.drawable.action); } } } But it doesn't work. when I click on buttons and I touch Play/Pause, I have this : 07-02 11:06:01.350: E/AndroidRuntime(7883): FATAL EXCEPTION: AsyncTask #1 07-02 11:06:01.350: E/AndroidRuntime(7883): Caused by: java.lang.NullPointerException 07-02 11:06:01.350: E/AndroidRuntime(7883): at com.Tutorial.Tutorial$PlayPause.doInBackground(Tutorial.java:603) 07-02 11:06:01.350: E/AndroidRuntime(7883): at com.Tutorial.Tutorial$PlayPause.doInBackground(Tutorial.java:1) And I don't know how to get rid of this error. Obviously, the Asynctask doesn't find the Buttonlist activated status, even if the list is static. I don't know how to access these buttons' states, isPressed doesn't work either. Thanks for reading and helping me !

    Read the article

  • Android MediaPlayer Won't Play Different Sounds

    - by cYn
    I'm making a simple app that plays a different sound according to its orientation. So if it's placed face down, a sound is played. If placed on its left side, a different sound is played. I'm having a hard time manipulating MediaPlayer correctly. My app runs fine. But it will only play one sound. For example. When I first boot up my app and place my device facing up, a sound will play. If I change its orientation, the sound will pause. But it will not start a different sound in a different orientation. BUT, if I place the device back facing up, it resumes the sound that it paused. I know I'm doing something wrong here, but I can't seem to figure it out the correct structure in using MediaPlayer and the program constantly calling it through onSensorChanged. public class MainActivity extends Activity implements SensorEventListener{ MediaPlayer mpAudioAttention; MediaPlayer mpAudioAssembly; MediaPlayer mpAudioRecall; MediaPlayer mpAudioRetreat; MediaPlayer mpAudioReveille; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); sm = (SensorManager) getSystemService(SENSOR_SERVICE); setContentView(R.layout.main); mpAudioAttention = MediaPlayer.create(this, R.raw.attention); mpAudioAssembly = MediaPlayer.create(this, R.raw.assembly); mpAudioRecall = MediaPlayer.create(this, R.raw.recall); mpAudioRetreat = MediaPlayer.create(this, R.raw.retreat); mpAudioReveille = MediaPlayer.create(this, R.raw.reveille); } public void onSensorChanged(SensorEvent event) { synchronized (this) { Log.d(tag, "onSensorChanged: " + event + ", z: " + event.values[0] + ", x: " + event.values[1] + ", y: " + event.values[2]); zViewO.setText("Orientation Z: " + event.values[0]); xViewO.setText("Orientation X: " + event.values[1]); yViewO.setText("Orientation Y: " + event.values[2]); } //face down if (event.values[2] > -11 && event.values[2] < -9){ mpAudioRetreat.start(); } else mpAudioRetreat.pause(); //face up if (event.values[2] < 11 && event.values[2] > 9){ mpAudioReveille.start(); } else mpAudioReveille.pause(); //standing if (event.values[0] > -10 && event.values[0] < -8){ mpAudioAttention.start(); } else mpAudioAttention.pause(); //left sideways if (event.values[1] < 11 && event.values[1] > 9){ mpAudioAssembly.start(); } else mpAudioAssembly.pause(); //right sideways if (event.values[1] > -11 && event.values[1] < -9){ mpAudioRecall.start(); } else mpAudioRecall.pause(); }
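
    A hedged sketch of one way to restructure this: keep a reference to the currently active MediaPlayer and switch only when the detected orientation actually changes, rewinding the old player so each sound restarts from the beginning instead of resuming where it paused. The threshold checks in the comments are just the ones from the question:

        private MediaPlayer current;   // whichever sound is playing right now

        private void switchTo(MediaPlayer next) {
            if (next == current) {
                return;                        // same orientation: leave it playing
            }
            if (current != null && current.isPlaying()) {
                current.pause();
                current.seekTo(0);             // rewind so the next trigger starts fresh
            }
            current = next;
            if (current != null) {
                current.start();
            }
        }

        // in onSensorChanged(), for example:
        // if (event.values[2] > -11 && event.values[2] < -9)     switchTo(mpAudioRetreat);   // face down
        // else if (event.values[2] < 11 && event.values[2] > 9)  switchTo(mpAudioReveille);  // face up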

    Read the article

  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began with SQL Server 2011 (Denali) CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and found the Microsoft documentation that explains how to install (restore) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here. Run the following script (replace the path of the mdf file with your own): CREATE DATABASE AdventureWorks2008R2 ON (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf') FOR ATTACH_REBUILD_LOG; When you run the above script it will give you the following messages and you are DONE! File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect. New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created. Converting database 'AdventureWorks2008R2' from version 679 to the current version 684. Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680. Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681. Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682. Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683. Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684. I will soon write about my experience with Denali. One thing I noticed is that SQL Server Management Studio has started to look more like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Convert OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has a different coordinate system and functions that alter how you must implement the code a bit. The getPickRay function in the linked answer is what I'm trying to convert. This is the part of my code that I think is giving me trouble when converting from OpenGL to DirectX, because I'm unsure how their coordinate systems differ from one another: PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2); PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2); Another thing that I am unsure about is this part: XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat); XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat); A couple of notes: the mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH. I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific function of his game. To summarize into questions: Does the OpenGL coordinate system differ in such a way that the equation in the first of my code examples wouldn't be valid when used in DirectX 11 (with its respective screen coordinate system)? Is the OpenGL method Matrix4f.transform(a, b, c) equal to the DirectX method c = XMVector3TransformCoord(b, a) (where a is a matrix and b, c are vectors)? Because I know that when it comes to matrices, order is important.
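
    For the conversion itself, it may be easier to lean on DirectXMath's built-in unprojection than to port the tan()-based math: XMVector3Unproject maps a screen-space point straight back to world space using the same viewport and matrices used for rendering. A sketch under the assumption that projMat and viewMat are the matrices you render with and mouseX/mouseY are client-area pixel coordinates:

        // screen-space points on the near (z = 0) and far (z = 1) planes
        XMVECTOR nearScreen = XMVectorSet((float)mouseX, (float)mouseY, 0.0f, 1.0f);
        XMVECTOR farScreen  = XMVectorSet((float)mouseX, (float)mouseY, 1.0f, 1.0f);

        XMVECTOR nearWorld = XMVector3Unproject(nearScreen, 0.0f, 0.0f,
            (float)ClientWidth, (float)ClientHeight, 0.0f, 1.0f,
            projMat, viewMat, XMMatrixIdentity());
        XMVECTOR farWorld  = XMVector3Unproject(farScreen, 0.0f, 0.0f,
            (float)ClientWidth, (float)ClientHeight, 0.0f, 1.0f,
            projMat, viewMat, XMMatrixIdentity());

        // pick ray: origin at the near point (or the camera position), normalized direction
        XMVECTOR pickDir = XMVector3Normalize(XMVectorSubtract(farWorld, nearWorld));

    On the second question: order does matter. DirectXMath's XMVector3TransformCoord(v, m) treats v as a row vector multiplied on the left of m, whereas typical OpenGL math libraries use column vectors (m * v), so a matrix built for one convention generally needs transposing for the other.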

    Read the article

  • First round playing with Memcached

    - by Shaun
    To be honest I have not been very interested in the caching before I’m going to a project which would be using the multi-site deployment and high connection and concurrency and very sensitive to the user experience. That means we must cache the output data for better performance. After looked for the Internet I finally focused on the Memcached. What’s the Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built on *nix system are is being widely used in the PHP world. Although it’s not a problem to use the Memcached installed on *nix system there are some windows version available fortunately. Since we are WISC (Windows – IIS – SQL Server – C#, which on the opposite of LAMP) it would be much easier for us to use the Memcached on Windows rather than *nix. I’m using the Memcached Win X64 version provided by NorthScale. There are also the x86 version and other operation system version.   Install Memcached Unpack the Memcached file to a folder on the machine you want it to be installed, we can see that there are only 3 files and the main file should be the “memcached.exe”. Memcached would be run on the server as a service. To install the service just open a command windows and navigate to the folder which contains the “memcached.exe”, let’s say “C:\Memcached\”, and then type “memcached.exe -d install”. If you are using Windows Vista and Windows 7 system please be execute the command through the administrator role. Right-click the command item in the start menu and use “Run as Administrator”, otherwise the Memcached would not be able to be installed successfully. Once installed successful we can type “memcached.exe -d start” to launch the service. Now it’s ready to be used. The default port of Memcached is 11211 but you can change it through the command argument. You can find the help by typing “memcached -h”.   Using Memcached Memcahed has many good and ready-to-use providers for vary program language. After compared and reviewed I chose the Memcached Providers. It’s built based on another 3rd party Memcached client named enyim.com Memcached Client. The Memcached Providers is very simple to set/get the cached objects through the Memcached servers and easy to be configured through the application configuration file (aka web.config and app.config). Let’s create a console application for the demonstration and add the 3 DLL files from the package of the Memcached Providers to the project reference. Then we need to add the configuration for the Memcached server. Create an App.config file and firstly add the section on top of it. Here we need three sections: the section for Memcached Providers, for enyim.com Memcached client and the log4net. 
1: <configSections> 2: <section name="cacheProvider" 3: type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders" 4: allowDefinition="MachineToApplication" 5: restartOnExternalChanges="true"/> 6: <sectionGroup name="enyim.com"> 7: <section name="memcached" 8: type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/> 9: </sectionGroup> 10: <section name="log4net" 11: type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/> 12: </configSections> Then we will add the configuration for 3 of them in the App.config file. The Memcached server information would be defined under the enyim.com section since it will be responsible for connect to the Memcached server. Assuming I installed the Memcached on two servers with the default port, the configuration would be like this. 1: <enyim.com> 2: <memcached> 3: <servers> 4: <!-- put your own server(s) here--> 5: <add address="192.168.0.149" port="11211"/> 6: <add address="10.10.20.67" port="11211"/> 7: </servers> 8: <socketPool minPoolSize="10" maxPoolSize="100" connectionTimeout="00:00:10" deadTimeout="00:02:00"/> 9: </memcached> 10: </enyim.com> Memcached supports the multi-deployment which means you can install the Memcached on the servers as many as you need. The protocol of the Memcached responsible for routing the cached objects into the proper server. So it’s very easy to scale-out your system by Memcached. And then define the Memcached Providers configuration. The defaultExpireTime indicates how long the objected cached in the Memcached would be expired, the default value is 2000 ms. 1: <cacheProvider defaultProvider="MemcachedCacheProvider"> 2: <providers> 3: <add name="MemcachedCacheProvider" 4: type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders" 5: keySuffix="_MySuffix_" 6: defaultExpireTime="2000"/> 7: </providers> 8: </cacheProvider> The last configuration would be the log4net. 1: <log4net> 2: <!-- Define some output appenders --> 3: <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> 4: <layout type="log4net.Layout.PatternLayout"> 5: <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/> 6: </layout> 7: </appender> 8: <!--<threshold value="OFF" />--> 9: <!-- Setup the root category, add the appenders and set the default priority --> 10: <root> 11: <priority value="WARN"/> 12: <appender-ref ref="ConsoleAppender"> 13: <filter type="log4net.Filter.LevelRangeFilter"> 14: <levelMin value="WARN"/> 15: <levelMax value="FATAL"/> 16: </filter> 17: </appender-ref> 18: </root> 19: </log4net>   Get, Set and Remove the Cached Objects Once we finished the configuration it would be very simple to consume the Memcached servers. The Memcached Providers gives us a static class named DistCache that can be used to operate the Memcached servers. Get<T>: Retrieve the cached object from the Memcached servers. If failed it will return null or the default value. Add: Add an object with a unique key into the Memcached servers. Assuming that we have an operation that retrieve the email from the name which is time consuming. This is the operation that should be cached. The method would be like this. I utilized Thread.Sleep to simulate the long-time operation. 1: static string GetEmailByNameSlowly(string name) 2: { 3: Thread.Sleep(2000); 4: return name + "@ethos.com.cn"; 5: } Then in the real retrieving method we will firstly check whether the name, email information had been searched previously and cached. 
If yes we will just return them from the Memcached, otherwise we will invoke the slowly method to retrieve it and then cached. 1: static string GetEmailByName(string name) 2: { 3: var email = DistCache.Get<string>(name); 4: if (string.IsNullOrEmpty(email)) 5: { 6: Console.WriteLine("==> The name/email not be in memcached so need slow loading. (name = {0})==>", name); 7: email = GetEmailByNameSlowly(name); 8: DistCache.Add(name, email); 9: } 10: else 11: { 12: Console.WriteLine("==> The name/email had been in memcached. (name = {0})==>", name); 13: } 14: return email; 15: } Finally let’s finished the calling method and execute. 1: static void Main(string[] args) 2: { 3: var name = string.Empty; 4: while (name != "q") 5: { 6: Console.Write("==> Please enter the name to find the email: "); 7: name = Console.ReadLine(); 8:  9: var email = GetEmailByName(name); 10: Console.WriteLine("==> The email of {0} is {1}.", name, email); 11: } 12: } The first time I entered “ziyanxu” it takes about 2 seconds to get the email since there’s nothing cached. But the next time I entered “ziyanxu” it returned very quickly from the Memcached.   Summary In this post I explained a bit on why we need cache, what’s Memcached and how to use it through the C# application. The example is fairly simple but hopefully demonstrated on how to use it. Memcached is very easy and simple to be used since it gives you the full opportunity to consider what, when and how to cache the objects. And when using Memcached you don’t need to consider the cache servers. The Memcached would be like a huge object pool in front of you. The next step I’m thinking now are: What kind of data should be cached? And how to determined the key? How to implement the cache as a layer on top of the business layer so that the application will not notice that the cache is there. How to implement the cache by AOP so that the business logic no need to consider the cache. I will investigate on them in the future and will share my thoughts and results.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
    Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey. Start with a simple Web Service     public class Service1 : System.Web.Services.WebService    {        [WebMethod]        public string HelloWorld()        {            // sleep 10 seconds            System.Threading.Thread.Sleep(10 * 1000);            return "Hello World";        }    } The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds. Add the web service to a web site. Right-click the project and select "Add Web Reference…" Next, create a web page to call the Web Service. Note: An asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header: <%@ Page Language="C#"          AutoEventWireup="true"          CodeFile="Default.aspx.cs"          Inherits="_Default"           Async='true'  %> Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events. public partial class _Default : System.Web.UI.Page {    localhost.Service1 MyService;  // web service proxy     // ---- Page_Load ---------------------------------     protected void Page_Load(object sender, EventArgs e)    {        MyService = new localhost.Service1();        MyService.HelloWorldCompleted += EventHandler;          } Here is the code to invoke the web service and handle the event:     // ---- Async and EventHandler (delayed render) --------------------------     protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)    {        // blocks        ODS("Pre HelloWorldAsync...");        MyService.HelloWorldAsync();        ODS("Post HelloWorldAsync");    }    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)    {        ODS("EventHandler");        ODS("    " + e.Result);    }     // ---- ODS ------------------------------------------------    //    // Helper function: Output Debug String     public static void ODS(string Msg)    {        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);        System.Diagnostics.Debug.WriteLine(Out);    } I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple. Fire up the project, open up a debug output window, press the button and we get this in the debug output window: 11:29:37.94 Pre HelloWorldAsync... 11:29:37.94 Post HelloWorldAsync 11:29:48.94 EventHandler 11:29:48.94 Hello World   Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect….right? Not so fast cowboy. Watch the browser during the call: What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted. 
I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy:     // ---- Begin and CallBack ----------------------------------     protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e)    {        ODS("Pre BeginHelloWorld...");        MyService.BeginHelloWorld(AsyncCallback, null);        ODS("Post BeginHelloWorld");    }    public void AsyncCallback(IAsyncResult ar)    {        String Result = MyService.EndHelloWorld(ar);         ODS("AsyncCallback");        ODS("    " + Result);    } The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this: 04:40:58.57 Pre BeginHelloWorld... 04:40:58.57 Post BeginHelloWorld 04:41:08.58 AsyncCallback 04:41:08.58 Hello World It works the same as before except for one critical difference: The page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page but the system was smart enough to keep the page object in memory to handle the callback. Both techniques have a use: Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm if an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running so you will not be able to update parts of the page when the service finishes running. Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible. I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject….but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student J. Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects, the 3 projects are: Web Service Asp.Net Application Windows Forms Application To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages. Probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing. 
I hope someone finds this useful. Steve Wellens

    Read the article

  • Scottish Visual Studio 2010 Launch event with Jason Zander

    - by Martin Hinshelwood
    Microsoft are hosting a launch event for Visual Studio 2010 on Friday 16th April in Edinburgh. They have managed to convince one of the head honchos from the Visual Studio product team to come to Scotland. Following Scott Guthrie's visit to Glasgow last week, Jason Zander, Global General Manager for Visual Studio, will now be arriving in Edinburgh for the launch event. There will be two speakers at the event: Jason will be up first and will be doing a session on Windows, Web, Cloud and Windows Phone 7 development with Visual Studio 2010. Second up is Giles Davies, the UK's Technical Specialist for Visual Studio ALM (formerly Visual Studio Team System), who will be introducing the new Visual Studio 2010 developer and tester collaboration features. LAUNCH AGENDA: 9.30am – 10.00am Arrival 10.00am - 11.30am Keynote & Q&A - Jason Zander, Global GM for Visual Studio 11.30am - 12.00pm Break 12.00pm - 1.00pm Developer & Tester Collaboration with Visual Studio 2010 - Giles Davies, Technical Specialist 1.00pm - 1.30pm Lunch DATE: Friday, 16th April 2010 LOCATION: Microsoft Edinburgh, Waverley Gate, 2-4 Waterloo Place, Edinburgh, EH1 3EG I think Jason will be hanging out for the afternoon to answer questions and meet everyone. If you would like to attend, please email Nathan Davies on [email protected] with your name, company and email address.   Technorati Tags: VS2010,TFS2010,Visual Studio,Visual Studio 2010

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page. First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple. So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.  Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.  Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than halfway full.  That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.  So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the Quota to be on the SHARE, not on a person.  Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that. Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. 
HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that.  Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.  Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did.   Go back to any share or project. You will see that you now have four new, custom properties on the bottom.  Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user.  After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.  Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I?    That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end.  After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.  You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.  For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.  I hope these tips have helped you out there. Quotas can be fun. 

    Read the article

  • Entity Framework v1 – Tips and Tricks Part 3

    - by Rohit Gupta
    General Tips on Entity Framework v1 & Linq to Entities: ToTraceString() If you need to know the underlying SQL that the EF generates for a Linq To Entities query, then use the ToTraceString() method of the ObjectQuery class. (or use LINQPAD) Note that you need to cast the LINQToEntities query to ObjectQuery before calling TotraceString() as follows: 1: string efSQL = ((ObjectQuery)from c in ctx.Contact 2: where c.Address.Any(a => a.CountryRegion == "US") 3: select c.ContactID).ToTraceString(); ================================================================================ MARS or MultipleActiveResultSet When you create a EDM Model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in CSDL. In the ConnectionString so generated it sets the MultipleActiveResultSet attribute to true by default. So if you are running the following query then it streams multiple readers over the same connection: 1: using (BAEntities context = new BAEntities()) 2: { 3: var cons = 4: from con in context.Contacts 5: where con.FirstName == "Jose" 6: select con; 7: foreach (var c in cons) 8: { 9: if (c.AddDate < new System.DateTime(2007, 1, 1)) 10: { 11: c.Addresses.Load(); 12: } 13: } 14: } ================================================================================= Explicitly opening and closing EntityConnection When you call ToList() or foreach on a LINQToEntities query the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQToEntities queries over the same connection then explicitly open and close the connection as follows: 1: using (BAEntities context = new BAEntities()) 2: { 3: context.Connection.Open(); 4: var cons = from con in context.Contacts where con.FirstName == "Jose" 5: select con; 6: var conList = cons.ToList(); 7: var allCustomers = from con in context.Contacts.OfType<Customer>() 8: select con; 9: var allcustList = allCustomers.ToList(); 10: context.Connection.Close(); 11: } ====================================================================== Dispose ObjectContext only if required After you retrieve entities using the ObjectContext and you are not explicitly disposing the ObjectContext then insure that your code does consume all the records from the LinqToEntities query by calling .ToList() or foreach statement, otherwise the the database connection will remain open and will be closed by the garbage collector when it gets to dispose the ObjectContext. Secondly if you are making updates to the entities retrieved using LinqToEntities then insure that you dont inadverdently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges() since you need the SAME ObjectContext to keep track of changes made to the Entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext do so only after calling SaveChanges() and only if you dont need to change track the entities retrieved any further. ======================================================================= SQL InjectionAttacks under control with EFv1 LinqToEntities and LinqToSQL queries are parameterized before they are sent to the DB hence they are not vulnerable to SQL Injection attacks. EntitySQL may be slightly vulnerable to attacks since it does not use parameterized queries. However since the EntitySQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time. 
So the only way one can do a SQLInjection Attack is by knowing the SSDL of the EDM Model and be able to write the correct EntitySQL (note one cannot append regular SQL since then the query wont be a valid EntitySQL syntax) and append it to a parameter. ====================================================================== Improving Performance You can convert the EntitySets and AssociationSets in a EDM Model into precompiled Views using the edmgen utility. for e.g. the Customer Entity can be converted into a precompiled view using edmgen and all LinqToEntities query against the contaxt.Customer EntitySet will use the precompiled View instead of the EntitySet itself (the same being true for relationships (EntityReference & EntityCollections of a Entity)). The advantage being that when using precompiled views the performance will be much better. The syntax for generating precompiled views for a existing EF project is : edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj Note that this will only generate precompiled views for EntitySets and Associations and not for existing LinqToEntities queries in the project.(for that use CompiledQuery.Compile<>) Secondly if you have a LinqToEntities query that you need to run multiple times, then one should precompile the query using CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lamda expression as a parameter, which denotes the LinqToEntities query  that you need to precompile. The following is a example of a lamda that we can pass into the CompiledQuery.Compile() method 1: Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) => 2: from c in ctx1.Contacts.OfType<Customer>() 3: where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc) 4: select c; Then we call the Compile Query as follows: 1: var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr); 2:  3: using (BAEntities ctx = new BAEntities()) 4: { 5: var loc = "Malta"; 6: IQueryable<Customer> custs = query.Invoke(ctx, loc); 7: var custlist = custs.ToList(); 8: foreach (var item in custlist) 9: { 10: Console.WriteLine(item.FullName); 11: } 12: } Note that if you created a ObjectQuery or a Enitity SQL query instead of the LINQToEntities query, you dont need precompilation for e.g. 1: An Example of EntitySQL query : 2: string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'"; 3: ObjectQuery<Customer> custs = CreateQuery<Customer>(esql); 1: An Example of ObjectQuery built using ObjectBuilder methods: 2: from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'") 3: select c This is since the Query plan is cached and thus the performance improves a bit, however since the ObjectQuery or EntitySQL query still needs to materialize the results into Entities hence it will take the same amount of performance hit as with LinqToEntities. However note that not ALL EntitySQL based or QueryBuilder based ObjectQuery plans are cached. So if you are in doubt always create a LinqToEntities compiled query and use that instead ============================================================ GetObjectStateEntry Versus GetObjectByKey We can get to the Entity being referenced by the ObjectStateEntry via its Entity property and there are helper methods in the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for a entity (for which we know the EntityKey). 
Similarly The ObjectContext has helper methods to get an Entity i.e. TryGetObjectByKey(). TryGetObjectByKey() uses GetObjectStateEntry method under the covers to find the object, however One important difference between these 2 methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries. It will not make a trip to the database ============================================================= POCO objects with EFv1: To create POCO objects that can be used with EFv1. We need to implement 3 key interfaces: IEntityWithKey IEntityWithRelationships IEntityWithChangeTracker Implementing IEntityWithKey is not mandatory, but if you dont then we need to explicitly provide values for the EntityKey for various functions (for e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementation of IEntityWithKey involves exposing a property named EntityKey which returns a EntityKey object. Implementation of IEntityWithChangeTracker involves implementing a method named SetChangeTracker since there can be multiple changetrackers (Object Contexts) existing in memory at the same time. 1: public void SetChangeTracker(IEntityChangeTracker changeTracker) 2: { 3: _changeTracker = changeTracker; 4: } Additionally each property in the POCO object needs to notify the changetracker (objContext) that it is updating itself by calling the EntityMemberChanged and EntityMemberChanging methods on the changeTracker. for e.g.: 1: public EntityKey EntityKey 2: { 3: get { return _entityKey; } 4: set 5: { 6: if (_changeTracker != null) 7: { 8: _changeTracker.EntityMemberChanging("EntityKey"); 9: _entityKey = value; 10: _changeTracker.EntityMemberChanged("EntityKey"); 11: } 12: else 13: _entityKey = value; 14: } 15: } 16: ===================== Custom Property ==================================== 17:  18: [EdmScalarPropertyAttribute(IsNullable = false)] 19: public System.DateTime OrderDate 20: { 21: get { return _orderDate; } 22: set 23: { 24: if (_changeTracker != null) 25: { 26: _changeTracker.EntityMemberChanging("OrderDate"); 27: _orderDate = value; 28: _changeTracker.EntityMemberChanged("OrderDate"); 29: } 30: else 31: _orderDate = value; 32: } 33: } Finally you also need to create the EntityState property as follows: 1: public EntityState EntityState 2: { 3: get { return _changeTracker.EntityState; } 4: } The IEntityWithRelationships involves creating a property that returns RelationshipManager object: 1: public RelationshipManager RelationshipManager 2: { 3: get 4: { 5: if (_relManager == null) 6: _relManager = RelationshipManager.Create(this); 7: return _relManager; 8: } 9: } ============================================================ Tip : ProviderManifestToken – change EDMX File to use SQL 2008 instead of SQL 2005 To use with SQL Server 2008, edit the EDMX file (the raw XML) changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008" ============================================================= With EFv1 we cannot use Structs to replace a anonymous Type while doing projections in a LINQ to Entities query. While the same is supported with LINQToSQL, it is not with LinqToEntities. For e.g. the following is not supported with LinqToEntities since only parameterless constructors and initializers are supported in LINQ to Entities. 
(the same works with LINQToSQL) 1: public struct CompanyInfo 2: { 3: public int ID { get; set; } 4: public string Name { get; set; } 5: } 6: var companies = (from c in dc.Companies 7: where c.CompanyIcon == null 8: select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
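Not from the original article, but a common workaround for the struct limitation above is to project into an anonymous type inside the LINQ to Entities query and then copy the rows into the struct once they are in memory. A minimal sketch, reusing the CompanyInfo struct and the dc context from the example:

    // Project to an anonymous type on the server, then switch to LINQ to Objects
    // with AsEnumerable() and build the struct client-side.
    var companies = (from c in dc.Companies
                     where c.CompanyIcon == null
                     select new { c.CompanyName, c.CompanyId })
                    .AsEnumerable()
                    .Select(x => new CompanyInfo { Name = x.CompanyName, ID = x.CompanyId })
                    .ToList();

Only the anonymous-type projection is translated to SQL; the struct construction happens after the rows are materialized, so EF never sees the parameterless-constructor restriction.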

    Read the article

  • Start Debugging in Visual Studio

    - by Daniel Moth
    Every developer is familiar with hitting F5 and debugging their application, which starts their app with the Visual Studio debugger attached from the start (instead of attaching later). This is one way to achieve step 1 of the Live Debugging process. Hitting F5, F11, Ctrl+F10 and the other ways to start the process under the debugger are covered in this MSDN "How To". The way you configure the debugging experience, before you hit F5, is by selecting the "Project" and then the "Properties" menu (Alt+F7 on my keyboard bindings). Depending on your project type there are different options, but if you browse to the Debug (or Debugging) node in the properties page you'll have a way to select local or remote machine debugging, what debug engines to use, command line arguments to use during debugging, etc. Currently the .NET and C++ project systems are different, but one would hope that one day they would be unified to use the same mechanism and UI (I don't work on that product team so I have no knowledge of whether that is a goal or if it will ever happen). Personally I like the C++ one better; here is what it looks like (and it is described on this MSDN page): If you were following along in the "Attach to Process" blog post, the equivalent to the "Select Code Type" dialog is the "Debugger Type" dropdown: that is how you change the debug engine. Some of the debugger properties options appear on the standard toolbar in VS. With Visual Studio 11, the Debug Type option has been added to the toolbar. If you don't see that in your installation, customize the toolbar to show it - VS 11 tends to be conservative in what you see by default, especially for the non-C++ Visual Studio profiles. Comments about this post by Daniel Moth welcome at the original blog.
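As a quick aside (not part of the original post), the simplest way to see exactly what the Debug property page hands to your process is to echo it at startup. A throwaway console app like this hypothetical sketch, started with F5, prints whatever you typed into the command arguments field and the working directory it runs under:

    using System;
    using System.IO;

    class Program
    {
        static void Main(string[] args)
        {
            // Whatever you entered under "Command line arguments" on the Debug page shows up here.
            Console.WriteLine("Working directory: " + Directory.GetCurrentDirectory());
            for (int i = 0; i < args.Length; i++)
                Console.WriteLine("arg[{0}] = {1}", i, args[i]);
            Console.ReadLine();   // keep the window open while the debugger is attached
        }
    }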

    Read the article

  • Lots of http HEAD requests originating from porn sites

    - by Don Corley
    My access log on my web server has a ton of http HEAD requests coming from porn sites. What are HEAD requests and are they doing something bad with my site? Here is an excerpt from my log: (valid request) 96.251.177.249 - - [02/Jan/2011:23:42:25 -0800] "POST /ajax HTTP/1.1" 200 0 "http://www.mywebsite.com/abc.html" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7" 80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD / HTTP/1.0" 302 185 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8" 80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD /tourappxsl HTTP/1.0" 200 16871 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8" I changed only the web addresses in this log. Thanks for any ideas, Don
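For background (not part of the original question): a HEAD request asks the server for only the response headers of a URL, with no body, which is why link checkers, crawlers and referrer spammers favour it. If you want to see what those clients receive from your own site, a small .NET sketch (the URL is a placeholder) looks like this:

    using System;
    using System.Net;

    class HeadProbe
    {
        static void Main()
        {
            // Placeholder URL -- point this at your own site.
            var request = (HttpWebRequest)WebRequest.Create("http://www.mywebsite.com/");
            request.Method = "HEAD";   // ask for headers only, no response body

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusDescription);
                foreach (string header in response.Headers)
                    Console.WriteLine("{0}: {1}", header, response.Headers[header]);
            }
        }
    }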

    Read the article

< Previous Page | 529 530 531 532 533 534 535 536 537 538 539 540  | Next Page >