Search Results

Search found 24515 results on 981 pages for '24 bit'.


  • Payment for OS X update in a business environment

    - by MrTJ
    I have to update some Mac workstations in a business environment from Snow Leopard (10.6) to Lion (10.7). As far as I can tell, the way to do this is to purchase the update from the App Store (for 24 EUR). The problem is that the users of the workstations all signed in to the App Store with their own Apple IDs, so their private credit card details are bound to those accounts, and I cannot require them to pay for the update themselves. How can I pay for the update with a company credit card?

    Read the article

  • How can I make the Windows VPN route selective traffic (by destination network)?

    - by Legooolas
    I want to use a Windows VPN, but only for a particular network, so that it doesn't take over my entire network connection. For example, instead of the VPN becoming the default route, make it the route only for 192.168.123.0/24. (I can see that there is a solution for this on Ubuntu in this question, but sometimes I have to do this on Windows too.) Can this be automated, so that it happens whenever I connect to the VPN?
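    (A sketch appended for illustration; it is not part of the original question.) One common approach is to untick "Use default gateway on remote network" in the VPN connection's TCP/IPv4 advanced settings, and then add a static route for just the one subnet each time the link comes up, for example from a small batch file. The entry name "Office VPN" and the gateway address 10.8.0.1 below are placeholders:

        rem Dial the VPN entry (prompts for the password), then route only
        rem 192.168.123.0/24 through it. Entry name and gateway are placeholders.
        rasdial "Office VPN" %USERNAME% *
        route add 192.168.123.0 mask 255.255.255.0 10.8.0.1 metric 1

    Connecting through such a script, rather than through the connection UI, automates the routing step.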

    Read the article

  • HD video is slower than audio output

    - by Star
    I have an HD video file (1920x1080, H.264, dual audio, FLAC). File type: MKV; file size: 1.25 GB; length: 24 minutes. The problem is that the video output is not synchronized with the audio output: sometimes the video lags too far behind, and sometimes it runs too fast. I tried running it in Windows Media Player, Media Player Classic, and a few other players, but the result is the same. Additional info: I'm on an LG S510 laptop.

    Read the article

  • Ruby on Rails configuration

    - by Themasterhimself
    I'm using the following guide for getting started with Rails on Ubuntu 9.10: http://guides.rails.info/getting_started.html. I have installed both Ruby and RubyGems:

        gokul@gokul-laptop:~$ ruby -v
        ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux]
        gokul@gokul-laptop:~$ gem -v
        1.3.6

    For Rails, `sudo gem install rails` doesn't seem to give any response, so I used the Synaptic package manager to install it instead, and it seems to have installed correctly:

        gokul@gokul-laptop:~$ rails
        Usage: /usr/bin/rails /path/to/your/app [options]
        Options:
          -r, --ruby=path          Path to the Ruby binary of your choice (otherwise scripts
                                   use env, dispatchers current path). Default: /usr/bin/ruby1.8
          -d, --database=name      Preconfigure for selected database (options:
                                   mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db).
                                   Default: sqlite3
          -D, --with-dispatchers   Add CGI/FastCGI/mod_ruby dispatches code to generated
                                   application skeleton. Default: false
              --freeze             Freeze Rails in vendor/rails from the gems generating
                                   the skeleton. Default: false
          -m, --template=path      Use an application template that lives at path
                                   (can be a filesystem path or URL). Default: (none)
        Rails Info:
          -v, --version            Show the Rails version number and quit.
          -h, --help               Show this help message and quit.
        General Options:
          -p, --pretend            Run but do not make any changes.
          -f, --force              Overwrite files that already exist.
          -s, --skip               Skip files that already exist.
          -q, --quiet              Suppress normal output.
          -t, --backtrace          Debugging: show backtrace on errors.
          -c, --svn                Modify files with subversion. (Note: svn must be in path)
          -g, --git                Modify files with git. (Note: git must be in path)
        Description:
          The 'rails' command creates a new Rails application with a default directory
          structure and configuration at the path you specify.
        Example:
          rails ~/Code/Ruby/weblog
          This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
          See the README in the newly created application to get going.

    The app folder is created with all the proper folders. The problem starts with the following commands:

        gokul@gokul-laptop:~$ sudo gem install bundler
        [sudo] password for gokul:
        Successfully installed bundler-0.9.24
        1 gem installed
        Installing ri documentation for bundler-0.9.24...
        Installing RDoc documentation for bundler-0.9.24...
        gokul@gokul-laptop:~$ bundle install
        Could not locate Gemfile

    Coming to the database, the default sqlite3 seems to have installed correctly:

        gokul@gokul-laptop:~$ sqlite3
        SQLite version 3.6.16
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite>

    The 'Welcome aboard' page cannot be found at http://localhost:3000 after executing the following commands:

        gokul@gokul-laptop:~/Desktop$ rails blog
        create
        create  app/controllers
        create  app/helpers
        create  app/models
        create  app/views/layouts
        create  config/environments
        create  config/initializers
        create  config/locales
        create  db
        create  doc
        create  lib
        create  lib/tasks
        create  log
        create  public/images
        create  public/javascripts
        create  public/stylesheets
        create  script/performance
        create  test/fixtures
        create  test/functional
        create  test/integration
        create  test/performance
        create  test/unit
        create  vendor
        create  vendor/plugins
        create  tmp/sessions
        create  tmp/sockets
        create  tmp/cache
        create  tmp/pids
        create  Rakefile
        create  README
        create  app/controllers/application_controller.rb
        create  app/helpers/application_helper.rb
        create  config/database.yml
        create  config/routes.rb
        create  config/locales/en.yml
        create  db/seeds.rb
        create  config/initializers/backtrace_silencers.rb
        create  config/initializers/inflections.rb
        create  config/initializers/mime_types.rb
        create  config/initializers/new_rails_defaults.rb
        create  config/initializers/session_store.rb
        create  config/environment.rb
        create  config/boot.rb
        create  config/environments/production.rb
        create  config/environments/development.rb
        create  config/environments/test.rb
        create  script/about
        create  script/console
        create  script/dbconsole
        create  script/destroy
        create  script/generate
        create  script/runner
        create  script/server
        create  script/plugin
        create  script/performance/benchmarker
        create  script/performance/profiler
        create  test/test_helper.rb
        create  test/performance/browsing_test.rb
        create  public/404.html
        create  public/422.html
        create  public/500.html
        create  public/index.html
        create  public/favicon.ico
        create  public/robots.txt
        create  public/images/rails.png
        create  public/javascripts/prototype.js
        create  public/javascripts/effects.js
        create  public/javascripts/dragdrop.js
        create  public/javascripts/controls.js
        create  public/javascripts/application.js
        create  doc/README_FOR_APP
        create  log/server.log
        create  log/production.log
        create  log/development.log
        create  log/test.log
        gokul@gokul-laptop:~/Desktop$ cd blog
        gokul@gokul-laptop:~/Desktop/blog$ rake db:create
        (in /home/gokul/Desktop/blog)
        gokul@gokul-laptop:~/Desktop/blog$ rails server
        create
        ... (the same full 'create' listing as for 'rails blog' above) ...
        gokul@gokul-laptop:~/Desktop/blog$

    Hope someone can help me with this...
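    (An aside from the editor, not the original poster.) One plausible explanation for the transcript above: the linked guide covers Rails 3, where `rails server` starts the application and `bundle install` expects a Gemfile, while the Ubuntu 9.10 package provides Rails 2.3, whose `rails` command only generates applications (as its usage text above shows). That would account for both the missing Gemfile and the second 'create' listing: `rails server` generated a new app named 'server' instead of starting one. Under that assumption, the Rails 2.3 way to launch the blog app would be:

        cd ~/Desktop/blog
        rake db:create        # create the SQLite development database
        ruby script/server    # Rails 2.3: start the server, then browse to http://localhost:3000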

    Read the article

  • How to match ColdFusion encryption with Java 1.4.2?

    - by JohnTheBarber
    * Sweet - thanks to Edward Smith for the CF technote that indicated the key from ColdFusion is Base64 encoded. See generateKey() for the 'fix'.

    My task is to use Java 1.4.2 to match the results of a given ColdFusion code sample for encryption.

    Known/given values:
    - A 24-byte key
    - A 16-byte salt (IVorSalt)
    - Encoding is Hex
    - Encryption algorithm is AES/CBC/PKCS5Padding
    - A sample clear-text value
    - The encrypted value of the sample clear-text after going through the ColdFusion code

    Assumptions:
    - The number of iterations is not specified in the ColdFusion code, so I assume only one iteration
    - The key is 24 bytes, so I assume 192-bit encryption

    Given/working ColdFusion encryption code sample:

        <cfset ThisSalt = "16byte-salt-here">
        <cfset ThisAlgorithm = "AES/CBC/PKCS5Padding">
        <cfset ThisKey = "a-24byte-key-string-here">
        <cfset thisAdjustedNow = now()>
        <cfset ThisDateTimeVar = DateFormat( thisAdjustedNow , "yyyymmdd" )>
        <cfset ThisDateTimeVar = ThisDateTimeVar & TimeFormat( thisAdjustedNow , "HHmmss" )>
        <cfset ThisTAID = ThisDateTimeVar & "|" & someOtherData>
        <cfset ThisTAIDEnc = Encrypt( ThisTAID , ThisKey , ThisAlgorithm , "Hex" , ThisSalt )>

    My Java 1.4.2 encryption/decryption code swag:

        package so.example;

        import java.security.*;
        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import org.apache.commons.codec.binary.*;

        public class SO_AES192 {

            private static final String _AES = "AES";
            private static final String _AES_CBC_PKCS5Padding = "AES/CBC/PKCS5Padding";
            private static final String KEY_VALUE = "a-24byte-key-string-here";
            private static final String SALT_VALUE = "16byte-salt-here";
            private static final int ITERATIONS = 1;
            private static IvParameterSpec ivParameterSpec;

            public static String encryptHex(String value) throws Exception {
                Key key = generateKey();
                Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding);
                ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes());
                c.init(Cipher.ENCRYPT_MODE, key, ivParameterSpec);
                String valueToEncrypt = null;
                String eValue = value;
                for (int i = 0; i < ITERATIONS; i++) {
                    // valueToEncrypt = SALT_VALUE + eValue;  // pre-pend salt: length > sample length
                    valueToEncrypt = eValue;                  // don't pre-pend salt: length = sample length
                    byte[] encValue = c.doFinal(valueToEncrypt.getBytes());
                    eValue = Hex.encodeHexString(encValue);
                }
                return eValue;
            }

            public static String decryptHex(String value) throws Exception {
                Key key = generateKey();
                Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding);
                ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes());
                c.init(Cipher.DECRYPT_MODE, key, ivParameterSpec);
                String dValue = null;
                char[] valueToDecrypt = value.toCharArray();
                for (int i = 0; i < ITERATIONS; i++) {
                    byte[] decodedValue = Hex.decodeHex(valueToDecrypt);
                    byte[] decValue = c.doFinal(decodedValue);
                    // dValue = new String(decValue).substring(SALT_VALUE.length());  // when salt is pre-pended
                    dValue = new String(decValue);                                    // when salt is not pre-pended
                    valueToDecrypt = dValue.toCharArray();
                }
                return dValue;
            }

            private static Key generateKey() throws Exception {
                // Key key = new SecretKeySpec(KEY_VALUE.getBytes(), _AES);  // this was wrong
                // Had to un-Base64 the 'known' 24-byte key (commons-codec Base64):
                Key key = new SecretKeySpec(Base64.decodeBase64(KEY_VALUE.getBytes()), _AES);
                return key;
            }
        }

    I cannot create a matching encrypted value, nor decrypt a given encrypted value. My guess is that it's something to do with how I'm handling the initialization vector/salt. I'm not very crypto-savvy, but I'm thinking I should be able to take the sample clear-text and produce the same encrypted value in Java as ColdFusion produced. I am able to encrypt/decrypt my own data with my Java code (so I'm consistent), but I cannot match nor decrypt the ColdFusion sample encrypted value.

    I have access to a local web service that can test the encrypted output. The given ColdFusion output sample passes/decrypts fine (of course). If I try to decrypt the same sample with my Java code (using the actual key and salt), I get a "Given final block not properly padded" error. I get the same net result when I pass my attempt at encryption (using the actual key and salt) to the test web service. Any ideas?

    Read the article

  • Unchecked call to compareTo

    - by Dave Jarvis
    Background: Create a Map that can be sorted by value.

    Problem: The code executes as expected, but does not compile cleanly: http://pastebin.com/bWhbHQmT

    The syntax for passing Comparable as a generic parameter along to the Map.Entry<K, V> (where V must be Comparable?) -- so that the (Comparable) typecast shown in the warning can be dropped -- eludes me.

    Warning: The compiler's cantankerous complaint:

        SortableValueMap.java:24: warning: [unchecked] unchecked call to compareTo(T)
        as a member of the raw type java.lang.Comparable
          return ((Comparable)entry1.getValue()).compareTo( entry2.getValue() );

    Question: How can the code be changed to compile without any warnings (without suppressing them while compiling with -Xlint:unchecked)?

    Related:
    - TreeMap sort by value
    - How to sort a Map on the values in Java?
    - http://paaloliver.wordpress.com/2006/01/24/sorting-maps-in-java/

    Thank you!
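    (An illustrative sketch, not part of the original question.) One way to make the raw Comparable cast unnecessary is to bound the map's value type. The class name matches the one in the warning, but the rest of the pastebin code's shape is assumed here; the essential move is declaring V extends Comparable<? super V>, so entry values are comparable without any cast:

        import java.util.*;

        // Bounding V means getValue() results are already Comparable: no raw cast, no warning.
        public class SortableValueMap<K, V extends Comparable<? super V>>
                extends LinkedHashMap<K, V> {

            // Returns this map's entries, sorted ascending by value.
            public List<Map.Entry<K, V>> entriesSortedByValue() {
                List<Map.Entry<K, V>> entries =
                        new ArrayList<Map.Entry<K, V>>(entrySet());
                Collections.sort(entries, new Comparator<Map.Entry<K, V>>() {
                    public int compare(Map.Entry<K, V> e1, Map.Entry<K, V> e2) {
                        return e1.getValue().compareTo(e2.getValue());
                    }
                });
                return entries;
            }
        }

    Declared this way, a SortableValueMap<String, Integer> compiles cleanly even with -Xlint:unchecked, since Integer satisfies the bound.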

    Read the article

  • How to send data to a server using Python

    - by Apache
    Hi experts. How can data be sent to the server? For example, I retrieve the MAC address and I want to send it to the server, i.e.:

        211.21.24.43:8080/data?mac=00-0C-F1-56-98-AD

    I found this snippet on the internet:

        from urllib2 import Request, urlopen
        from binascii import b2a_base64

        def b64open(url, postdata):
            req = Request(url, b2a_base64(postdata),
                          headers={'Content-Transfer-Encoding': 'base64'})
            return urlopen(req)

        conn = b64open("http://211.21.24.43:8080/data", "mac=00-0C-F1-56-98-AD")

    but when I run it, I get:

        File "send2.py", line 8
        SyntaxError: Non-ASCII character '\xc3' in file send2.py on line 8, but no
        encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Can anyone help me send the data to the server? Thanks in advance.

    Read the article

  • Referencing ASP.NET textbox data in JavaScript

    - by GoldenEarring
    I'm interested in making an interactive 3D pie chart for a webpage using JavaScript and ASP.NET controls. Essentially, I want to make an interactive version of the chart here: https://google-developers.appspot.com/chart/interactive/docs/gallery/piechart#3D. I want to have five ASP.NET textboxes where the user enters data and then submits it, and have the chart adjust according to what the user enters. I understand that using ASP.NET controls with JS is probably not the most effective way to go about it, but I would really appreciate it if someone could share how this would be possible. I really don't know where to begin. Thanks for any help!

        <%@ Page Language="C#" %>

        <!DOCTYPE html>

        <script runat="server">
            void btn1_Click(object sender, EventArgs e)
            {
                double s = 0.0;
                double b = 0;
                double g = 0.0f;
                double c = 0.0f;
                double h = 0.0f;

                s = double.Parse(txtWork.Text);
                b = double.Parse(txtEat.Text);
                g = double.Parse(txtCommute.Text);
                c = double.Parse(txtWatchTV.Text);
                h = double.Parse(txtSleep.Text);

                double total = s + b + g + c + h;
                if (total != 24)
                {
                    lblError.Text = "Warning! A day has 24 hours";
                }
                if (total == 24)
                {
                    lblError.Text = string.Empty;
                }
            }
        </script>

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
            <script type="text/javascript" src="https://www.google.com/jsapi"></script>
            <script type="text/javascript">
                google.load("visualization", "1", { packages: ["corechart"] });
                google.setOnLoadCallback(drawChart);

                function drawChart() {
                    var data = google.visualization.arrayToDataTable([
                        ['Task', 'Hours per Day'],
                        ['Work', 11],
                        ['Eat', 2],
                        ['Commute', 2],
                        ['Watch TV', 2],
                        ['Sleep', 7]
                    ]);
                    var options = {
                        title: 'My Daily Activities',
                        is3D: true,
                    };
                    var chart = new google.visualization.PieChart(document.getElementById('piechart_3d'));
                    chart.draw(data, options);
                }

                var data = new google.visualization.DataTable();

                var txtWork = document.getElementById('<%=txtWork.ClientID%>')
                txtEat = document.getElementById('<%=txtEat.ClientID%>')
                txtCommute = document.getElementById('<%=txtCommute.ClientID%>')
                txtWatchTV = document.getElementById('<%=txtWatchTV.ClientID%>')
                txtSleep = document.getElementById('<%=txtSleep.ClientID%>');

                var workvalue = parseInt(txtWork, 10)
                var eatvalue = parseInt(txtEat, 10)
                var commutevalue = parseInt(txtCommute, 10)
                var watchtvvalue = parseInt(txtWatchTV, 10)
                var sleepvalue = parseInt(txtSleep, 10)

                // Declare columns
                data.addColumn('string', 'Task');
                data.addColumn('Number', 'Hours per day');

                // Add data.
                data.addRows([
                    ['Work', workvalue],
                    ['Eat', eatvalue],
                    ['Commute', commutevalue],
                    ['Watch TV', watchtvvalue],
                    ['Sleep', sleepvalue],
                ]);
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
                <div id="piechart_3d" style="width: 900px; height: 500px;"></div>
                <asp:Label ID="lblError" runat="server" Font-Size="X-Large" Font-Bold="true" />
                <table>
                    <tr><td>Work:</td><td><asp:TextBox ID="txtWork" Text="11" runat="server" /></td></tr>
                    <tr><td>Eat:</td><td><asp:TextBox ID="txtEat" Text="2" runat="server" /></td></tr>
                    <tr><td>Commute:</td><td><asp:TextBox ID="txtCommute" Text="2" runat="server" /></td></tr>
                    <tr><td>Watch TV:</td><td><asp:TextBox ID="txtWatchTV" Text="2" runat="server" /></td></tr>
                    <tr><td>Sleep:</td><td><asp:TextBox ID="txtSleep" Text="7" runat="server" /></td></tr>
                </table>
                <br />
                <br />
                <asp:Button ID="btn1" Text="Draw 3D PieChart" runat="server" OnClick="btn1_Click" />
            </form>
        </body>
        </html>

    Read the article

  • How do I mount my SD card? I am using Ubuntu 10.04

    - by shobhit
        root@shobhit:/media# lsusb
        Bus 002 Device 017: ID 14cd:125c Super Top
        Bus 002 Device 003: ID 0c45:6421 Microdia
        Bus 002 Device 002: ID 8087:0020
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 011: ID 413c:8160 Dell Computer Corp.
        Bus 001 Device 006: ID 413c:8162 Dell Computer Corp.
        Bus 001 Device 005: ID 413c:8161 Dell Computer Corp.
        Bus 001 Device 004: ID 138a:0008 DigitalPersona, Inc
        Bus 001 Device 003: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
        Bus 001 Device 002: ID 8087:0020
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        root@shobhit:/home/shobhit/scripts/internalUtilities# sudo lspci -v -nn
        00:1a.0 USB Controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b3c] (rev 06) (prog-if 20)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, medium devsel, latency 0, IRQ 16
            Memory at fbc08000 (32-bit, non-prefetchable) [size=1K]
            Capabilities: [50] Power Management version 2
            Capabilities: [58] Debug port: BAR=1 offset=00a0
            Capabilities: [98] PCIe advanced features <?>
            Kernel driver in use: ehci_hcd
        00:1d.0 USB Controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b34] (rev 06) (prog-if 20)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, medium devsel, latency 0, IRQ 23
            Memory at fbc07000 (32-bit, non-prefetchable) [size=1K]
            Capabilities: [50] Power Management version 2
            Capabilities: [58] Debug port: BAR=1 offset=00a0
            Capabilities: [98] PCIe advanced features <?>
            Kernel driver in use: ehci_hcd
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev a6) (prog-if 01)
            Flags: bus master, fast devsel, latency 0
            Bus: primary=00, secondary=20, subordinate=20, sec-latency=32
            Capabilities: [50] Subsystem: Dell Device [1028:0441]
        00:1f.0 ISA bridge [0601]: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller [8086:3b0b] (rev 06)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, medium devsel, latency 0
            Capabilities: [e0] Vendor Specific Information <?>
            Kernel modules: iTCO_wdt
        00:1f.2 SATA controller [0106]: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [8086:3b2f] (rev 06) (prog-if 01)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 29
            I/O ports at f070 [size=8]
            I/O ports at f060 [size=4]
            I/O ports at f050 [size=8]
            I/O ports at f040 [size=4]
            I/O ports at f020 [size=32]
            Memory at fbc06000 (32-bit, non-prefetchable) [size=2K]
            Capabilities: [80] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+
            Capabilities: [70] Power Management version 3
            Capabilities: [a8] SATA HBA <?>
            Capabilities: [b0] PCIe advanced features <?>
            Kernel driver in use: ahci
            Kernel modules: ahci
        00:1f.3 SMBus [0c05]: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller [8086:3b30] (rev 06)
            Subsystem: Dell Device [1028:0441]
            Flags: medium devsel, IRQ 3
            Memory at fbc05000 (64-bit, non-prefetchable) [size=256]
            I/O ports at f000 [size=32]
            Kernel modules: i2c-i801
        00:1f.6 Signal processing controller [1180]: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem [8086:3b32] (rev 06)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, fast devsel, latency 0, IRQ 3
            Memory at fbc04000 (64-bit, non-prefetchable) [size=4K]
            Capabilities: [50] Power Management version 3
            Capabilities: [80] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
        12:00.0 Network controller [0280]: Broadcom Corporation Device [14e4:4727] (rev 01)
            Subsystem: Dell Device [1028:0010]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at fbb00000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [58] Vendor Specific Information <?>
            Capabilities: [48] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [d0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting <?>
            Capabilities: [13c] Virtual Channel <?>
            Capabilities: [160] Device Serial Number cb-c0-8b-ff-ff-38-00-00
            Capabilities: [16c] Power Budgeting <?>
            Kernel driver in use: wl
            Kernel modules: wl
        13:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 03)
            Subsystem: Dell Device [1028:0441]
            Flags: bus master, fast devsel, latency 0, IRQ 28
            I/O ports at e000 [size=256]
            Memory at d0b04000 (64-bit, prefetchable) [size=4K]
            Memory at d0b00000 (64-bit, prefetchable) [size=16K]
            Expansion ROM at fba00000 [disabled] [size=128K]
            Capabilities: [40] Power Management version 3
            Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
            Capabilities: [70] Express Endpoint, MSI 01
            Capabilities: [ac] MSI-X: Enable- Mask- TabSize=4
            Capabilities: [cc] Vital Product Data <?>
            Capabilities: [100] Advanced Error Reporting <?>
            Capabilities: [140] Virtual Channel <?>
            Capabilities: [160] Device Serial Number 00-e0-4c-68-00-00-00-03
            Kernel driver in use: r8169
            Kernel modules: r8169

        root@shobhit:/home/shobhit/scripts/internalUtilities# sudo lshw
        shobhit description: Portable Computer product: Vostro 3500 vendor: Dell Inc. version: A10 serial: FV1L3N1 width: 32 bits capabilities: smbios-2.6 dmi-2.6 smp-1.4 smp configuration: boot=normal chassis=portable cpus=2 uuid=44454C4C-5600-1031-804C-C6C04F334E31
          *-core description: Motherboard product: 0G2R51 vendor: Dell Inc. physical id: 0 version: A10 serial: .FV1L3N1.CN7016612H00PW. slot: To Be Filled By O.E.M.
            *-cpu:0 description: CPU product: Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: 6.5.5 serial: 0002-0655-0000-0000-0000-0000 slot: CPU 1 size: 1197MHz capacity: 2926MHz width: 64 bits clock: 533MHz capabilities: boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: id=4
              *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 64KiB capacity: 64KiB capabilities: internal write-back unified
              *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 512KiB capacity: 512KiB capabilities: internal varies unified
              *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 3MiB capacity: 3MiB capabilities: internal varies unified
              *-logicalcpu:0 description: Logical CPU physical id: 4.1 width: 64 bits capabilities: logical
              *-logicalcpu:1 description: Logical CPU physical id: 4.2 width: 64 bits capabilities: logical
              *-logicalcpu:2 description: Logical CPU physical id: 4.3 width: 64 bits capabilities: logical
              *-logicalcpu:3 description: Logical CPU physical id: 4.4 width: 64 bits capabilities: logical
              *-logicalcpu:4 description: Logical CPU physical id: 4.5 width: 64 bits capabilities: logical
              *-logicalcpu:5 description: Logical CPU physical id: 4.6 width: 64 bits capabilities: logical
              *-logicalcpu:6 description: Logical CPU physical id: 4.7 width: 64 bits capabilities: logical
              *-logicalcpu:7 description: Logical CPU physical id: 4.8 width: 64 bits capabilities: logical
              *-logicalcpu:8 description: Logical CPU physical id: 4.9 width: 64 bits capabilities: logical
              *-logicalcpu:9 description: Logical CPU physical id: 4.a width: 64 bits capabilities: logical
              *-logicalcpu:10 description: Logical CPU physical id: 4.b width: 64 bits capabilities: logical
              *-logicalcpu:11 description: Logical CPU physical id: 4.c width: 64 bits capabilities: logical
              *-logicalcpu:12 description: Logical CPU physical id: 4.d width: 64 bits capabilities: logical
              *-logicalcpu:13 description: Logical CPU physical id: 4.e width: 64 bits capabilities: logical
              *-logicalcpu:14 description: Logical CPU physical id: 4.f width: 64 bits capabilities: logical
              *-logicalcpu:15 description: Logical CPU physical id: 4.10 width: 64 bits capabilities: logical
            *-memory description: System Memory physical id: 1d slot: System board or motherboard size: 3GiB
              *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: HMT112S6TFR8C-H9 vendor: AD80 physical id: 0 serial: 5525C935 slot: DIMM_A size: 1GiB width: 64 bits clock: 1333MHz (0.8ns)
              *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: HMT125S6TFR8C-H9 vendor: AD80 physical id: 1 serial: 3441D6CA slot: DIMM_B size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
            *-firmware description: BIOS vendor: Dell Inc. physical id: 0 version: A10 (10/25/2010) size: 64KiB capacity: 1984KiB capabilities: mca pci upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb zipboot biosbootspecification
            *-cpu:1 physical id: 1 bus info: cpu@1 version: 6.5.5 serial: 0002-0655-0000-0000-0000-0000 size: 1197MHz capacity: 1197MHz capabilities: vmx ht cpufreq configuration: id=4
              *-logicalcpu:0 description: Logical CPU physical id: 4.1 capabilities: logical
              *-logicalcpu:1 description: Logical CPU physical id: 4.2 capabilities: logical
              *-logicalcpu:2 description: Logical CPU physical id: 4.3 capabilities: logical
              *-logicalcpu:3 description: Logical CPU physical id: 4.4 capabilities: logical
              *-logicalcpu:4 description: Logical CPU physical id: 4.5 capabilities: logical
              *-logicalcpu:5 description: Logical CPU physical id: 4.6 capabilities: logical
              *-logicalcpu:6 description: Logical CPU physical id: 4.7 capabilities: logical
              *-logicalcpu:7 description: Logical CPU physical id: 4.8 capabilities: logical
              *-logicalcpu:8 description: Logical CPU physical id: 4.9 capabilities: logical
              *-logicalcpu:9 description: Logical CPU physical id: 4.a capabilities: logical
              *-logicalcpu:10 description: Logical CPU physical id: 4.b capabilities: logical
              *-logicalcpu:11 description: Logical CPU physical id: 4.c capabilities: logical
              *-logicalcpu:12 description: Logical CPU physical id: 4.d capabilities: logical
              *-logicalcpu:13 description: Logical CPU physical id: 4.e capabilities: logical
              *-logicalcpu:14 description: Logical CPU physical id: 4.f capabilities: logical
              *-logicalcpu:15 description: Logical CPU physical id: 4.10 capabilities: logical
            *-pci description: Host bridge product: Core Processor DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 18 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0
              *-display description: VGA compatible controller product: Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 18 width: 64 bits clock: 33MHz capabilities: msi pm bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:30 memory:fac00000-faffffff memory:c0000000-cfffffff(prefetchable) ioport:f080(size=8)
              *-communication UNCLAIMED description: Communication controller product: 5 Series/3400 Series Chipset HECI Controller vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: latency=0 resources: memory:fbc09000-fbc0900f
              *-usb:0 description: USB Controller product: 5 Series/3400 Series Chipset USB2 Enhanced Host Controller vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 06 width: 32 bits clock: 33MHz capabilities: pm debug bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:16 memory:fbc08000-fbc083ff
              *-multimedia description: Audio device product: 5 Series/3400 Series Chipset High Definition Audio vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:22 memory:fbc00000-fbc03fff
              *-pci:0 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:24 ioport:2000(size=4096) memory:bc000000-bc1fffff memory:bc200000-bc3fffff(prefetchable)
              *-pci:1 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:25 ioport:3000(size=4096) memory:fbb00000-fbbfffff memory:bc400000-bc5fffff(prefetchable)
                *-network description: Wireless interface product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:12:00.0 logical name: eth1 version: 01 serial: c0:cb:38:8b:aa:d8 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.60.48.36 ip=10.0.1.50 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:fbb00000-fbb03fff
              *-pci:2 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 3 vendor: Intel Corporation physical id: 1c.2 bus info: pci@0000:00:1c.2 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:26 ioport:e000(size=4096) memory:fba00000-fbafffff ioport:d0b00000(size=1048576)
                *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:13:00.0 logical name: eth0 version: 03 serial: 78:2b:cb:cc:0e:2a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:28 ioport:e000(size=256) memory:d0b04000-d0b04fff(prefetchable) memory:d0b00000-d0b03fff(prefetchable) memory:fba00000-fba1ffff(prefetchable)
              *-pci:3 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 5 vendor: Intel Corporation physical id: 1c.4 bus info: pci@0000:00:1c.4 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:27 ioport:d000(size=4096) memory:fb000000-fb9fffff ioport:d0000000(size=10485760)
              *-usb:1 description: USB Controller product: 5 Series/3400 Series Chipset USB2 Enhanced Host Controller vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 06 width: 32 bits clock: 33MHz capabilities: pm debug bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:fbc07000-fbc073ff
              *-pci:4 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: a6 width: 32 bits clock: 33MHz capabilities: pci bus_master cap_list
              *-isa description: ISA bridge product: Mobile 5 Series Chipset LPC Interface Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 06 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0
              *-storage description: SATA controller product: 5 Series/3400 Series Chipset 6 port SATA AHCI Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 logical name: scsi1 version: 06 width: 32 bits clock: 66MHz capabilities: storage msi pm bus_master cap_list emulated configuration: driver=ahci latency=0 resources: irq:29 ioport:f070(size=8) ioport:f060(size=4) ioport:f050(size=8) ioport:f040(size=4) ioport:f020(size=32) memory:fbc06000-fbc067ff
                *-disk description: ATA Disk product: WDC WD3200BEKT-7 vendor: Western Digital physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 01.0 serial: WD-WX21AC0W1945 size: 298GiB (320GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=77e3ed41
                  *-volume:0 description: Windows NTFS volume physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: 3.1 serial: aa69-51c0 size: 98MiB capacity: 100MiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2012-04-03 02:00:15 filesystem=ntfs label=System Reserved state=clean
                  *-volume:1 description: Windows NTFS volume physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 version: 3.1 serial: 9854ff5c-1dea-a147-84a6-624e758f44b8 size: 48GiB capacity: 48GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2012-04-10 13:55:31 filesystem=ntfs modified_by_chkdsk=true mounted_on_nt4=true resize_log_file=true state=dirty upgrade_on_mount=true
                  *-volume:2 description: Extended partition physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 size: 48GiB capacity: 48GiB capabilities: primary extended partitioned partitioned:extended
                    *-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 1952MiB capabilities: nofs
                    *-logicalvolume:1 description: Linux filesystem partition physical id: 6 logical name: /dev/sda6 logical name: / capacity: 46GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,barrier=1,data=ordered state=mounted
                  *-volume:3 description: Windows NTFS volume physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 logical name: /media/56AA8094AA807273 version: 3.1 serial: 22a29e8d-56c7-9a4a-adea-528103948f6d size: 200GiB capacity: 200GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2012-04-02 20:17:15 filesystem=ntfs modified_by_chkdsk=true mount.fstype=fuseblk mount.options=rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 mounted_on_nt4=true resize_log_file=true state=mounted upgrade_on_mount=true
                *-cdrom description: DVD-RAM writer product: DVD+-RW TS-L633J vendor: TSSTcorp physical id: 1 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd0 logical name: /dev/sr0 version: D200 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc
              *-serial UNCLAIMED description: SMBus product: 5 Series/3400 Series Chipset SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 06 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:fbc05000-fbc050ff ioport:f000(size=32)
              *-generic UNCLAIMED description: Signal processing controller product: 5 Series/3400 Series Chipset Thermal Subsystem vendor: Intel Corporation physical id: 1f.6 bus info: pci@0000:00:1f.6 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: latency=0 resources: memory:fbc04000-fbc04fff
            *-scsi physical id: 2 bus info: usb@2:1.1 logical name: scsi15 capabilities: emulated scsi-host configuration: driver=usb-storage
              *-disk description: SCSI Disk physical id: 0.0.0 bus info: scsi@15:0.0.0 logical name: /dev/sdb

    I have tried all the options I know of, like fdisk /dev/sdb and pmount /dev/sdb, but nothing is working. Please guide me.

    Read the article

  • DevConnections Session Slides, Samples and Links

    - by Rick Strahl
    Finally coming up for air this week, after catching up with being on the road for the better part of three weeks. Here are my slides, samples and links for my four DevConnections sessions two weeks ago in Vegas. I ended up doing one extra, unprepared-for session on Web API and AJAX, as some of the speakers were either delayed or unable to make it to Vegas at all due to Sandy's mayhem. It was pretty hectic in the speaker room as Erik (our event coordinator extraordinaire) was scrambling to fill session slots with speakers :-). Surprisingly, it didn't feel like the storm affected attendance drastically, but I guess it's hard to tell without actual numbers.

    The conference was a lot of fun - it's been a while since I've been speaking at one of these larger conferences. I'd been taking a hiatus, and I forgot how much I enjoy actually giving talks. Preparing - well, not quite so much, especially since I ended up essentially preparing anew or completely rewriting all three of these talks, and I was stressing out a bit as I was sick the week before the conference and didn't get as much time to prepare as I wanted. But - as always seems to be the case - it all worked out, though I guess those who attended have to be the judge of that…

    It was great to catch up with my speaker friends as well - man, I feel out of touch. I got to spend a bunch of time with Dan Wahlin, Ward Bell and Julie Lerman, and for about 10 minutes even got to catch up with the ever-so-busy Michele Bustamante. Lots of great technical discussions, including a fun and heated REST controversy with Ward and Howard Dierking. There were also a number of great discussions with attendees, describing how they're using the technologies touched on in my talks in live applications. I got some great ideas from some of these, and I wish there had been more opportunities for these kinds of discussions.

    One thing I miss at these Vegas events, though, is some sort of coherent event where attendees and speakers get to mingle. These Vegas conferences are just "go to sessions, then go out and PARTY on the town" - it's Vegas, after all! But I think it's always nice to have at least one evening event where everybody gets to hang out together and trade stories and geek talk. Overall there didn't seem to be much opportunity for that beyond lunch or the small and short exhibit hall events, which it seemed not many people actually went to.

    Anyway, a good time was had. I hope those of you who came to my sessions learned something useful. There were lots of great questions and discussions after the sessions - I always appreciate hearing the real-life scenarios people deal with, in relation to the abstracted scenarios in sessions. Here are the session abstracts, a few comments, and the links for downloading slides and samples. It's not quite like being there, but I hope this stuff turns out to be useful to some of you. I'll be following up a couple of these sessions with white papers in the following weeks. Enjoy.

    ASP.NET Architecture: How ASP.NET Works at the Low Level

    Abstract: Interested in how ASP.NET works at a low level? ASP.NET is extremely powerful and flexible technology, but it's easy to forget about the core framework that underlies the higher-level technologies like ASP.NET MVC, WebForms, WebPages and Web services that we deal with on a day-to-day basis.
    The ASP.NET core drives all the higher-level handlers and frameworks layered on top of it, and with the core power comes some complexity, in the form of a very rich object model that controls the flow of a request through the ASP.NET pipeline, from Windows HTTP services down to the application level. To take full advantage of it, it helps to understand the underlying architecture and model. This session discusses the architecture of ASP.NET along with a number of useful tidbits that you can use for building and debugging your ASP.NET applications more efficiently. We look at the overall architecture, how requests flow from the IIS (7 and later) Web server to the ASP.NET runtime into HTTP handlers, modules and filters, and finally into high-level handlers like MVC, Web Forms or Web API. The focus of this session is on the low-level aspects of the ASP.NET runtime, with examples that demonstrate the bootstrapping of ASP.NET, threading models, how Application Domains are used, startup bootstrapping, how configuration files are applied, and how all of this relates to the applications you write, whether using low-level tools like HTTP handlers and modules or high-level pages or services sitting at the top of the ASP.NET runtime processing chain.

    Comments: I was surprised to see so many people show up for this session - especially since it was the last session on the last day, and a short one-hour session to boot. The room was packed, and it was great to see so many people interested in the architecture of ASP.NET beyond their immediate high-level application needs. Lots of great questions in this talk as well - I only wish this session had been the full hour and 15 minutes, as we came up just a little short of getting through the main material (didn't make it to filters and error handling). I haven't done this session in a long time, and I had to pretty much re-figure all the system internals having to do with ASP.NET bootstrapping in light of the changes that came with IIS 7 and later. The last time I did this talk was with IIS 6 - I guess it's been a while. I love doing this session, mainly because in my mind the core of ASP.NET is so cleanly designed to provide maximum flexibility without compromising performance, a design that has clearly stood the test of time in the 10 years or so that .NET has been around. While there are a lot of moving parts, the technology is easy to manage once you understand the core components, and the core model hasn't changed much even while the underlying architecture that drives it has been almost completely revamped, especially with the introduction of IIS 7 and later.

    Resources:
    - Download Samples and Slides

    Introduction to using jQuery with ASP.NET

    Abstract: In this session you'll learn how to take advantage of jQuery in your ASP.NET applications. Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors for document element selection, manipulating these elements with jQuery's wrapped-set methods in a browser-independent way, how to hook up and handle events easily, and generally how to apply the concepts of unobtrusive JavaScript to client scripting. The second half of the session then delves into jQuery's AJAX features and several different ways you can interact with ASP.NET on the server. You'll see examples of using ASP.NET MVC for serving HTML and JSON AJAX content, as well as using the new ASP.NET Web API to serve JSON and hypermedia content.
    You'll also see examples of client-side templating/databinding with Handlebars and Knockout.

    Comments: This session was in a monster of a room, and to my surprise it was nearly packed, given that this was a 100-level session. I can see that it's a good idea to continue doing intro sessions on jQuery, as there appeared to be quite a number of folks who had not worked much with jQuery yet and who most likely could greatly benefit from using it. It seemed to me the session got more than a few people excited to get going if they hadn't yet :-). Anyway, I just love doing this session because it's mostly live coding and highly interactive - there aren't many sessions where I can build things up from scratch and iterate in an hour. jQuery makes that easy, though.

    Resources:
    - Slides and Code Samples
    - Introduction to jQuery White Paper
    - Introduction to ASP.NET Web API

    Hosting the Razor Scripting Engine in Your Own Applications

    Abstract: The Razor engine used in ASP.NET MVC and ASP.NET Web Pages is a free-standing scripting engine that can be disassociated from these Web-specific implementations and used in your own applications. Razor allows for a powerful mix of code and text rendering that makes it a wonderful tool for any sort of text generation, from creating HTML output in non-Web applications, to rendering mail-merge-like functionality, to code generation for developer tools, and even as a plug-in scripting engine. In this session we'll look at the components that make up the Razor engine and how you can bootstrap it in your own applications to hook up templating. You'll find out how to create custom templates and manage Razor requests that can be pre-compiled, detect page changes, and act in ways similar to a full runtime. We look at ways you can pass data into the engine and retrieve both the rendered output and result values, in a package that makes it easy to plug Razor into your own applications.

    Comments: That this session was picked was a bit of a surprise to me, since it's a bit of a niche topic. Even more of a surprise was that quite a few people who attended had actually used Razor externally and were there to find out more about how the process works and how to extend it. In the session I talk a bit about a custom Razor hosting implementation (Westwind.RazorHosting) and drill into the various components required to build a custom Razor hosting engine and a runtime around it. This session was a bit of a chore to prepare for, as there are lots of technical implementation details that need to be dealt with (some of which aren't addressed even by the wrapper libraries that exist), and squeezing that into an hour and 15 minutes is a bit tight. I found out, though, that there's quite a bit of interest in using a templating engine outside of Web applications, often side by side with the HTML output generated by frameworks like MVC or WebForms. An extra fun part of this session was that it was my first session, and when I went to set up I realized I had forgotten my mini-DVI-to-VGA adapter cable for the projector in my room - 6 minutes before the session was about to start. So I ended up covering the half mile plus back to my room - and back - at a full sprint. I managed to be only a couple of minutes late, but when I started I was out of breath for the first 10 minutes or so while trying to talk.
    Musta sounded a bit funny as I was trying not to gasp too much :-)

    Resources:
    - Slides and Code Samples
    - Westwind.RazorHosting GitHub Project
    - Original RazorHosting Blog Post

    Introduction to ASP.NET Web API for AJAX Applications

    Abstract: Web API provides a new framework for creating REST-based APIs, but it can also act as a backend to typical AJAX operations. This session covers the core features of Web API as they relate to typical AJAX application development. We'll cover content negotiation, routing and a variety of output generation options, as well as managing data updates from the client, in the context of a small Single Page Application-style Web app. Finally, we'll look at some of the extensibility features in Web API to customize and extend it in a number of useful ways.

    Comments: This session was a fill-in for session slots left open by speakers stranded by Sandy. I had samples from my previous Web API article, so I decided to go ahead and put together a session from them. Given that I spent only a couple of hours preparing and putting slides together, I was glad it turned out as it did - it kind of just ran itself by way of the examples, as well as some nice audience interactions and questions. Lots of interest - and also some confusion about when Web API makes sense. Both this session and the jQuery session ended up getting a ton of questions about when to use Web API vs. MVC, whether it would make sense to switch to Web API for all AJAX backend work, etc. In my opinion, there's no need to jump to Web API for existing applications that already have a good AJAX foundation. Web API is awesome for real externally consumed APIs and clearly defined application AJAX APIs. For typical application-level AJAX calls it's still a good idea, but ASP.NET MVC can serve most if not all of that functionality just as well. There's no need to abandon MVC (or even ASP.NET AJAX or third-party AJAX backends) just to move to Web API. For new projects Web API probably makes good sense for isolating AJAX calls, but it really depends on how the application is set up. In some cases, sharing business logic between the HTML and AJAX interfaces with a single MVC API can be cleaner than creating two completely separate code paths to serve essentially the same business logic.

    Resources:
    - Slides and Code Samples
    - Sample Code on GitHub
    - Introduction to ASP.NET Web API White Paper

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Conferences, ASP.NET

    Read the article

  • [MISC GEEKERY] Support for Some Versions of Windows is Ending

    - by Matthew Guay
    Are you sticking with your older version of Windows instead of upgrading to Windows 7? There's no problem with that, but here's a quick reminder to make sure you're running the latest service pack to stay protected.

    Microsoft offers security updates and more throughout the lifetime of a version of Windows, and periodically they roll all the latest updates and improvements together into a service pack. After a while, only computers running the latest service pack still get updates to keep them safe.

    Recently, Microsoft has been warning that support is ending for Windows XP with Service Pack 2 and for the release version of Windows Vista. When support ends, you will not receive any new security updates for Windows. You can continue to use your computer the same as before, but it may not be as secure, and if new security issues are discovered they will not be fixed. However, it's easy to stay supported: simply install XP Service Pack 3 or Vista Service Pack 2, depending on your computer. Here's how to do that:

    Windows XP

    To install Windows XP Service Pack 3, you can either check Windows Update for updates, or simply download it from Microsoft at this link: Download XP Service Pack 3

    Run the download (if you're updating from Windows Update, the installer will launch automatically) and proceed just as you normally would when installing a program. Your computer will have to reboot during the install, so make sure you've saved all your work and closed other programs before installing.

    To check which service pack your computer is running, click Start, then right-click on the My Computer button and choose Properties. This will show you what version and service pack of Windows you are running; in this screenshot we see this computer has been updated to Service Pack 3.

    Please note: the version of XP shipped with Windows XP Mode in Windows 7 comes preconfigured with Service Pack 3 and does not need to be updated. Additionally, if your computer is running the 64-bit version of Windows XP, then Service Pack 2 is the latest service pack for your computer, and it is still supported.

    Windows Vista

    If your computer is running Windows Vista, you can install Service Pack 2 to stay up to date and supported. Simply check Windows Update for Service Pack 2 if you haven't installed it yet, or download the installer for your computer from the links below:

    32 bit: Vista Service Pack 2 32-bit
    64 bit: Vista Service Pack 2 64-bit

    Run the installer and set it up as a normal program installation. Do note that your computer will reboot during the installation, so make sure to save your work and close other programs before installing.

    To see what service pack your computer is running, click the Start orb, then right-click on the Computer button and select Properties. This will show what service pack and edition of Windows Vista your computer is running, right at the top of the page.

    Conclusion

    Microsoft makes it easy to keep using your computer safely and securely even if you choose to keep using your older version of Windows. By installing the latest service pack, you will make sure that your computer is supported for years to come. Windows 7 users, you don't need to worry: no service pack has been released for Windows 7 yet. Stay tuned, and we'll let you know when any new service packs are available.
    www.microsoft.com/EOS – End of Support Information from Microsoft

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage-setting for tablets. Looking at the preview screenshot, I started to wonder.

    What it's called

    Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg

    Pretty conclusive. If you can brand something a "phablet" and sleep at night, you're made of sterner stuff than I am, but that's beside the point. It's explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6-inch "phablet"? Sadly, the string of numbers there doesn't really look like a Lumia model number - that would be a bit tendentious even for a speculative blog post about a single screenshot. "Productivity" is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

    What it looks like

    Something that would look quite decent on a 7-inch screen, but a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren't getting bigger.

    What's on it

    You have a bunch of missed calls, you rarely text, you use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There's no beating about the bush here: the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing up daisies, that may not be a stupid play. There's almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they've chosen to present, the cultural milieu they're normalizing for Windows Phone. It's an iPhone for Serious Business Grown-ups. Could work, I guess.

    Does it mean anything?

    Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven't exactly taken off, but I'm not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 "phablet" into the palm of every exec who's ever grumbled about their Blackberry, and this release might get them a bit closer. If it works well incrementing up to larger devices, that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

    - by TheCodeJunkie
    I'm playing around a bit with image processing and decided to read up on how color quantization works, and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in the Leptonica library and came across something I thought was a bit odd. Now I want to stress that I am far from an expert in this area, nor am I a math-head, so I am predicting that this all comes down to me not understanding all of it, and not that the implementation of the algorithm is wrong at all. The algorithm states that the vbox should be split along the largest axis and that it should be split using the following logic:
    The largest axis is divided by locating the bin with the median pixel (by population), selecting the longer side, and dividing in the center of that side. We could have simply put the bin with the median pixel in the shorter side, but in the early stages of subdivision, this tends to put low density clusters (that are not considered in the subdivision) in the same vbox as part of a high density cluster that will outvote it in median vbox color, even with future median-based subdivisions. The algorithm used here is particularly important in early subdivisions, and is useful for giving visible but low population color clusters their own vbox. This has little effect on the subdivision of high density clusters, which ultimately will have roughly equal population in their vboxes.
    For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following:
    - Iterate over all the possible green and blue variations of the red color
    - For each iteration, add to the total number of pixels (population) found along the red axis
    - For each red color, sum up the population of the current red and the previous ones, thus storing an accumulated value for each red
    Note: when I say 'red' I mean each point along the axis that is covered by the iteration; the actual color may not be red but contains a certain amount of red.
    So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations:
    4 8 20 16 1 9 12 8 8
    After the iteration over all red bins, the partialsum array will contain the following counts for the bins mentioned above:
    4 12 32 48 49 58 70 78 86
    And total would have a value of 86. Once that's done it's time to perform the actual median cut, and for the red axis this is performed on line 01346. It iterates over the bins and checks their accumulated sum. And here's the part that throws me off from the description of the algorithm: it looks for the first bin that has a value greater than total/2. Wouldn't total/2 mean that it is looking for a bin that has a value greater than the average value, and not the median? The median for the above bins would be 49. The use of 43 (total/2) or 49 (the median) could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was. Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but does not mention how to proceed if there is an even number of bins: the median would be the result of (a+b)/2, and it's not guaranteed that any of the bins contains that population count.
    So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if it got a bit long-winded, but I wanted to be as thorough as I could, because it's been driving me nuts for a couple of days now ;)
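    For anyone who wants to poke at the numbers, here is a minimal C# sketch of the accumulate-and-cut step as I read it. It is only a sketch: the names partialsum and total mirror the question above, not Leptonica's actual identifiers.
    using System;
    class MedianCutSketch
    {
        static void Main()
        {
            // Bin populations along the red axis, taken from the example above.
            int[] population = { 4, 8, 20, 16, 1, 9, 12, 8, 8 };
            // Accumulate: partialsum[i] holds the population of bins 0..i.
            int[] partialsum = new int[population.Length];
            int total = 0;
            for (int i = 0; i < population.Length; i++)
            {
                total += population[i];
                partialsum[i] = total; // 4, 12, 32, 48, 49, 58, 70, 78, 86
            }
            // The step in question: find the first bin whose running total
            // exceeds total/2 (86/2 = 43). Here that is bin 3, where the
            // running total reaches 48 - the bin in which the cumulative
            // count first passes half the population.
            for (int i = 0; i < partialsum.Length; i++)
            {
                if (partialsum[i] > total / 2)
                {
                    Console.WriteLine("Cut bin: {0} (partialsum = {1})", i, partialsum[i]);
                    break;
                }
            }
        }
    }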

    Read the article

  • Error when connecting to AS400 (ISeries)

    - by Jimmy Engtröm
    I'm trying to connect to an AS400 server using the .NET classes. I have added a reference to IBM.Data.DB2.iSeries and I use the following code:
    var conn = new iDB2Connection("DataSource=111.111.111.111;UserID=xxx;Password=xxx;DataCompression=True;");
    conn.Open();
    But I get the following exceptions:
    Running 64-bit: "The provider cannot run in 64-bit mode."
    Running 32-bit: An unexpected exception occurred. Type: System.DllNotFoundException, Message: Unable to load DLL 'cwbdc.dll': The operating system cannot run . (Exception from HRESULT: 0x800700B6).
    I have uninstalled Client Access and installed it again. The cwbdc.dll does exist in the System32 and SysWOW64 folders. I have no problem connecting to the AS400 if I use ODBC. I'm running a 64-bit version of Windows 7. Any ideas? /Jimmy
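    As a side note, a minimal sketch of making the bitness problem explicit before the provider throws its less obvious exception. Assumptions: .NET 4.0 or later (for Environment.Is64BitProcess), and the provider namespace referenced in the question; the provider wraps 32-bit Client Access DLLs, so the executable should be compiled with Platform Target = x86.
    using System;
    using IBM.Data.DB2.iSeries; // assumed provider namespace, per the reference above
    class ConnectSketch
    {
        static void Main()
        {
            // Guard: the managed provider loads 32-bit native DLLs, so a
            // 64-bit process cannot host it.
            if (Environment.Is64BitProcess)
            {
                Console.WriteLine("Compile for x86; the provider cannot run in 64-bit mode.");
                return;
            }
            using (var conn = new iDB2Connection(
                "DataSource=111.111.111.111;UserID=xxx;Password=xxx;DataCompression=True;"))
            {
                conn.Open();
                Console.WriteLine("Connected.");
            }
        }
    }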

    Read the article

  • 12.04 LTS: unity --reset hangs

    - by Gregory R. Pace
    Nearly every time I reboot my machine, the system panel and integrated app menus fail to load. At a terminal, when issuing 'unity --reset', I get the following errors:
    ...
    Initializing widget options...done
    Initializing winrules options...done
    Initializing wobbly options...done
    ERROR 2012-11-05 04:36:48 unity.glib-gobject <unknown>:0 g_object_unref: assertion `G_IS_OBJECT (object)' failed
    ERROR 2012-11-05 04:36:48 unity.gtk <unknown>:0 gtk_window_resize: assertion `width > 0' failed
    WARN 2012-11-05 04:37:14 unity <unknown>:0 Unable to fetch children: No such interface `org.ayatana.bamf.view' on object at path /org/ayatana/bamf/application885622223
    ERROR 2012-11-05 04:37:21 unity.glib-gobject <unknown>:0 g_object_set_qdata: assertion `G_IS_OBJECT (object)' failed
    Setting Update "main_menu_key"
    Setting Update "run_key"
    WARN 2012-11-05 04:38:06 unity.iconloader IconLoader.cpp:438 Unable to load icon stock-person at size 24
    WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached
    WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached
    The procedure hangs at this point. Any ideas how to solve these problems? Thanks in advance.

    Read the article

  • How to integerate Skype in Messaging Menu with Skype-Wrapper?

    - by Tahir Akram
    I can't see skype-wrapper in the Unity dash (Alt+F2). So I run it from the terminal and attach it to Skype. But it only appears in the messaging menu when I run it from the terminal, like this:
    tahir@StoneCode:~$ skype-wrapper
    Starting skype-wrapper
    /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
    import gobject._gobject
    INFO: Initializing Skype API
    INFO: Waiting for Skype Process
    INFO: Attaching skype-wrapper to Skype process
    INFO: Attached complete
    When I quit the terminal, Skype disappears from the messaging menu. So do I need to run skype-wrapper instead of skype, and add it to startup? Or is there any other workaround? I followed this tutorial. Restarting also does not help. Thanks.
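    In case it helps, a minimal sketch of an autostart entry. Assumptions: skype-wrapper is on the PATH, and the file name plus the Name/Comment fields are my own invention. Saved as ~/.config/autostart/skype-wrapper.desktop it would launch the wrapper at login:
    [Desktop Entry]
    Type=Application
    Name=Skype Wrapper
    Comment=Attach skype-wrapper to Skype for messaging menu integration
    Exec=skype-wrapper
    X-GNOME-Autostart-enabled=true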

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.
    Part 2: SQL Server Memory Management
    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:
    1. Conventional Memory Model
    The Conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
    - Pageable - can be paged out to disk by the operating system.
    2. Locked Page Model
    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic - can grow or shrink its working set in the same way as the Conventional model.
    - OS uses 4K pages.
    - Non-Pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.
    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory: "locked" memory works perfectly well with "dynamic" memory.
    * Note in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.
    3. Large Page Model
    The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static - memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-Pageable.
    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.
    When does SQL Server grow?
    For "dynamic" configurations (Conventional and Locked memory models), the sqlservr.exe process grows - allocates and commits memory from the OS - in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:
    - The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single page allocations; in "Denali" it represents any size page allocations and also managed CLR procedure allocations.
    - Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.
    To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.
    When does SQL Server shrink caches?
    SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".
    - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory SQL Server does the following:
      - Frees unused memory.
      - Notifies Memory Manager Clients to release memory:
        - Caches - free unreferenced cache objects.
        - Buffer pool - based on oldest access times.
    The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.
    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, SQL will free memory for re-use internally, but not release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.
    Memory pressure handling has not changed much since SQL 2005, and it was described in detail in a blog post by Slava Oks.
    Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system, but tries to be more "desktop" friendly. It will empty its working set after a period of inactivity.
    How does SQL Server respond to changing OS memory?
    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine it looks like hot-add physical memory to the guest. What this means is that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory.
    SQL Server checks OS memory every second and dynamically adjusts its "target" (based on available OS memory and max server memory) accordingly.
    In "Denali", Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).
    How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?
    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.
    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?
    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).
    This raises another question. Can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.
    What Memory Management changes are there in SQL Server "Denali"?
    In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.
    Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).
    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
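    As a concrete illustration of the "job which varies max server memory" idea above, here is a minimal C# sketch. The connection string and value are placeholders; sp_configure and RECONFIGURE are standard T-SQL, and 'max server memory (MB)' is an advanced option, hence the 'show advanced options' step.
    using System.Data.SqlClient;
    class MaxServerMemorySketch
    {
        // Sets 'max server memory (MB)' on the target instance. Lowering it
        // generates internal memory pressure, causing SQL Server to release
        // memory back to the OS, as described above.
        static void SetMaxServerMemory(string connectionString, int megabytes)
        {
            const string sql =
                "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                "EXEC sp_configure 'max server memory (MB)', @mb; RECONFIGURE;";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@mb", megabytes);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }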

    Read the article

  • What is difference between RegSvr and RegServer?

    - by Rahul
    Why is there a difference in registering a COM component for 32-bit and 64-bit? I mean, at one point you have to use this:
    RegSvr32 COM.exe or RegSvr32 COM.dll
    while on a 64-bit OS you have to use:
    COM.exe /RegServer
    COM.exe /RegSvr
    Are /RegServer and /RegSvr the same or different? If different, then what is the difference? Thanks in advance, Rahul
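    In sketch form, the usual convention (independent of 32- vs 64-bit): RegSvr32 loads a DLL and calls its exported DllRegisterServer function, while an EXE server has no such export, so it registers itself when launched with the /RegServer switch (and /UnregServer to unregister). As far as I know /RegSvr is not a standard switch; it only works if that particular EXE chooses to accept it. A minimal C# sketch of the EXE side, where RegisterServer and UnregisterServer are hypothetical helpers:
    using System;
    class ComExeServerSketch
    {
        static void Main(string[] args)
        {
            // Convention used by ATL-style EXE servers: the executable parses
            // its own command line, since there is no DllRegisterServer export
            // for an external tool like RegSvr32 to call.
            if (args.Length > 0)
            {
                switch (args[0].TrimStart('/', '-').ToLowerInvariant())
                {
                    case "regserver":
                        RegisterServer();   // hypothetical: write CLSID/LocalServer32 keys
                        return;
                    case "unregserver":
                        UnregisterServer(); // hypothetical: remove those keys
                        return;
                }
            }
            RunServer(); // normal startup: register class objects and serve clients
        }
        static void RegisterServer() { /* write HKCR\CLSID\{...}\LocalServer32 entries */ }
        static void UnregisterServer() { /* delete those entries */ }
        static void RunServer() { }
    }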

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role.
    - WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases.
    - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent requests limitation.
    - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
    - WACS provides the service configuration features, which can be changed while the role is running.
    - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
    - Since with a WACS worker role we start the node process ourselves, we can control the input, output and error streams. We can also control the version of Node.js.
    Run Node.js in Worker Role
    Node.js can be started by just having its execution file. This means that in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js will be installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu. We have to create a text file and then rename it to the JavaScript extension. After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed. Let's paste some very simple Node.js code into "index.js" as below. As you can see, I created a web server listening at port 12345.
    var http = require("http");
    var port = 12345;
    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);
    console.log("Server running at port %d", port);
    Then we need to start "node.exe" with this file when our worker role is started. This can be done in its Run method: find the node.exe and entry JavaScript file names, then create a new process to run node. Our worker role will wait for the process to exit.
    If everything is OK, once our web server is opened the process will stay there listening for incoming requests, and should not be terminated. The code in the worker role would be like this:
    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("NodejsHost entry point called", "Information");
        // retrieve the node.exe and entry node.js source code file name.
        var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
        var js = "index.js";
        // prepare the process starting of node.exe
        var info = new ProcessStartInfo(node, js)
        {
            CreateNoWindow = false,
            ErrorDialog = true,
            WindowStyle = ProcessWindowStyle.Normal,
            UseShellExecute = false,
            WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
        };
        Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");
        // start the node.exe with entry code and wait for exit
        var process = Process.Start(info);
        process.WaitForExit();
    }
    Then we can run it locally. In the compute emulator UI the worker role starts and executes Node.js, and the Node.js window appears. Open the browser to verify the website hosted by our worker role. Next let's deploy it to azure, but we need some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server, we should add a TCP endpoint on the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" so that our web server listens on 80:
    var http = require("http");
    var port = 80;
    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);
    console.log("Server running at port %d", port);
    Then publish it to Windows Azure, and in the browser we can see our Node.js website running on the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has registered 80 and maps the 80 endpoint to 81, but our Node.js cannot detect this operation. So when it tries to listen on 80 it will fail, since 80 is already in use.
    Use NPM Modules
    When we are using WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk and not included in the worker role project. If we deploy the worker role right now, the module will not be packaged and uploaded to azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they can be uploaded to azure with "node.exe" and "index.js". This is a painful step, since there might be many files in a module.
    So I created a small tool which can update a C# project file, making all of its items "Copy always". The code is very simple:
    static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.WriteLine("Usage: copyallalways [project file]");
            return;
        }
        var proj = args[0];
        File.Copy(proj, string.Format("{0}.bak", proj));
        var xml = new XmlDocument();
        xml.Load(proj);
        var nsManager = new XmlNamespaceManager(xml.NameTable);
        nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");
        // add the output setting to copy always
        var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
        UpdateNodes(contentNodes, xml, nsManager);
        var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
        UpdateNodes(noneNodes, xml, nsManager);
        xml.Save(proj);
        // remove the namespace attributes
        var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
        xml.LoadXml(content);
        xml.Save(proj);
    }
    static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
    {
        foreach (XmlNode node in nodes)
        {
            var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
            if (copyToOutputDirectoryNode == null)
            {
                var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
                n.InnerText = "Always";
                node.AppendChild(n);
            }
            else
            {
                if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
                {
                    copyToOutputDirectoryNode.InnerText = "Always";
                }
            }
        }
    }
    Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express:
    var express = require("express");
    var app = express();
    var port = 80;
    app.configure(function () {
    });
    app.get("/", function (req, res) {
        res.send("Hello Node.js!");
    });
    app.get("/User/:id", function (req, res) {
        var id = req.params.id;
        res.json({
            "id": id,
            "name": "user " + id,
            "company": "IGT"
        });
    });
    app.listen(port);
    Finally, let's publish it and have a look in the browser.
    Use Windows Azure SQL Database
    We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on worker role hosting as well. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, since WAWS only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD:
    var express = require("express");
    var sql = require("node-sqlserver");
    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;
    var app = express();
    app.configure(function () {
        app.use(express.bodyParser());
    });
    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });
    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });
    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });
    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });
    app.listen(port);
    Publish to azure, and now we can see our Node.js working with WASD through the x64 version of "node-sqlserver".
    Summary
    In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparatory jobs before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I mentioned here. The first one is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second one is that in this way we cannot retrieve the cloud service configuration information.
    For example, we defined the endpoint in the worker role properties but also hardcoded the listening port in Node.js. It should be changed so that Node.js can retrieve the endpoint, but I can tell you it won't work here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter

    - by Bill Sawyer
    Our next ATG Live Webcast is now open for registration. The event is titled: E-Business Suite R12.x SOA Using the E-Business Suite Adapter. This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter. While primarily focused on integrators and developers, understanding SOA capabilities is important for all E-Business Suite technologists and superusers.
    ATG Live Webcast Logistics
    The one-hour event will be webcast live, with dial-in access for Q&A with the Applications Technology Group (ATG) Development experts presenting the event. The basic information for the event is as follows:
    E-Business Suite R12.x SOA Using the E-Business Suite Adapter
    Date: Thursday, February 24, 2011
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Neeraj Chauhan, Product Manager, ATG Development
    NOTE: When you register for the event, the confirmation will show the event starting at 7:30 AM Pacific Standard Time. This is to allow you time to connect to the conference call and web conference. The presentation will start at 8:00 AM Pacific Standard Time.

    Read the article

  • Linux Kernel not upgraded (from Ubuntu 12.04 to 12.10) - can't remove old kernels and can't install new apps

    - by Tony Breyal
    Question: How do I remove old kernel images which refuse to be removed? Context: Yesterday I upgraded Ubuntu from 12.04 to 12.10. However, the linux kernel has not upgraded from 3.2 to 3.5 as I would have expected. $ uname -r 3.2.0-32-generic $ uname -a Linux tony-b 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ cat /proc/version Linux version 3.2.0-32-generic (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 Not sure why that happened there. I wanted to install Audacity (v2.0.1-1_amd64) to edit a lecture audio file. When trying this operation through Ubuntu Software Center, it says that to install audacity, four items will need to be removed: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic So I click "Install Anyway" but it fails with the following output: installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 259675 files and directories currently installed.) Removing linux-image-3.2.0-27-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-29-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-30-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic Generating grub.cfg ... 
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing linux-image-3.2.0-31-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic Error in function: Setting up grub-pc (2.00-7ubuntu11) ... /usr/sbin/grub-bios-setup: warning: Sector 32 is already in use by the program `FlexNet'; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track. Installation finished. No error reported. Generating grub.cfg ... dpkg: error processing grub-pc (--configure): subprocess installed post-installation script returned error exit status 1 It seems I need to remove the old linux images somehow. I have tried this through (1) Synaptic, (2) Ubuntu Tweak, and (3) Computer Janitor. The first two fail, whilst Computer Janitor won't even open. The output from Synaptic is: E: linux-image-3.2.0-27-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-29-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-30-generic: subprocess installed post-removal script returned error exit status 1 E: linux-image-3.2.0-31-generic: subprocess installed post-removal script returned error exit status 1 How do I remove these old images? Thank you kindly in advance for any help on this matter. P.S. Further information: $ dpkg --list | grep linux-image rH linux-image-3.2.0-27-generic 3.2.0-27.43 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-29-generic 3.2.0-29.46 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-30-generic 3.2.0-30.48 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP rH linux-image-3.2.0-31-generic 3.2.0-31.50 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-32-generic 3.2.0-32.51 amd64 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.5.0-17-generic 3.5.0-17.28 amd64 Linux kernel image for version 3.5.0 on 64 bit x86 SMP ii linux-image-extra-3.5.0-17-generic 3.5.0-17.28 amd64 Linux kernel image for version 3.5.0 on 64 bit x86 SMP ii linux-image-generic 3.5.0.17.19 amd64 Generic Linux kernel image But trying to remove using the command line fails too e.g.: $ sudo apt-get purge linux-image-3.2.0-27-generic Reading package lists... Done Building dependency tree Reading state information... 
Done The following packages will be REMOVED linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic 0 upgraded, 0 newly installed, 4 to remove and 1 not upgraded. 5 not fully installed or removed. After this operation, 597 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 259675 files and directories currently installed.) Removing linux-image-3.2.0-27-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-29-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-30-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328. dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Removing linux-image-3.2.0-31-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic Generating grub.cfg ... run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328. 
dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports has already been reached Errors were encountered while processing: linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Are there any famous one-man-army programmers?

    - by DFectuoso
    Lately I have been learning of more and more programmers who think that if they were working alone, they would be faster and would deliver more quality. Usually that belief comes with a feeling that they do the best programming in their team, and at the end of the day the idea is quite plausible: if they ARE doing the best programming and worked alone (and perhaps more), the final result would be a better piece of software. I know this idea would only work if you were passionate enough to work 24/7, on a deadline, with great discipline. So after considering the idea and trying to learn a little more, I wonder: are there famous one-man-army programmers who have delivered any (useful) software in the past?

    Read the article

  • Using 32bit COM object from C# or VBS on Vista 64bit and getting error 80004005

    - by alexandroid
    I need some mind reading here, since I am trying to do something I do not completely understand. There is a 32-bit application (an electronic trading application called CQG) which provides a COM API for external access. I have sample programs and scripts which access this API from Excel, .NET (C++, VB and C#) and shell VBScript. I have these .NET applications as source code and as compiled executables (32-bit, compiled on Windows XP). Now I have Windows Vista Home 64-bit, which makes my head spin. The Excel examples work just fine (in Excel 2003). The compiled .NET sample executables work as well. But when I try to run the .NET C# sample converted to and compiled by Visual C# Express, or run the VBScript script, I get error 80004005 when trying to create an object. Initially the .NET application also gave me 80040154, but then I figured out how to make it produce 32-bit code and not 64-bit, so now the errors in the C# and VBScript applications are the same. That's all the progress I have for now. And yes, I tried running the 32-bit versions of cscript.exe/WScript from the SysWOW64 folder on my VBS, but the result is still the same (80004005). How do I solve this problem? I am almost ready to believe it is practically impossible, but the fact that the Excel VBA works and the .NET executables compiled on Windows XP run just fine makes me angry. There should be a way to beat this thing (some secret which probably only Windows Vista developers know)! I will appreciate any help! PS: I believe code samples do not make much sense here, but this is the line of VBScript which fails:
    Set CEL = WScript.CreateObject("CQG.CQGCEL.4.0", "CEL_")
    And this is the C#:
    CQGCEL CEL = new CQGCEL();
    Update: Forgot to say UAC is off, of course, and I am working from an account with administrator privileges. I also tried watching which registry keys are read using Process Monitor, but everything looks OK for the GUIDs of this object. I could not recognize some other GUIDs, so I am not sure whether they were critical or not. Is there a chance that this COM object uses Internet Explorer and gets the wrong one (like the Internet Explorer 7 engine instead of Internet Explorer 6, or something)?
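    One small diagnostic that might help separate "not registered in this bitness's registry view" from "registered but failing to start". It is a sketch, nothing more; the ProgID comes from the script above.
    using System;
    using System.Runtime.InteropServices;
    class ComDiagnosticSketch
    {
        static void Main()
        {
            // In a 32-bit process on 64-bit Windows this looks up the ProgID
            // in the WOW64 registry view; in a 64-bit process, the native view.
            Type t = Type.GetTypeFromProgID("CQG.CQGCEL.4.0");
            if (t == null)
            {
                Console.WriteLine("ProgID not visible to a {0}-bit process.", IntPtr.Size * 8);
                return;
            }
            try
            {
                object cel = Activator.CreateInstance(t);
                Console.WriteLine("Created: {0}", cel.GetType());
            }
            catch (COMException ex)
            {
                // 0x80004005 (E_FAIL) surfaces here if the server starts but
                // fails internally, e.g. a missing dependency or UI context.
                Console.WriteLine("CreateInstance failed: 0x{0:X8}", ex.ErrorCode);
            }
        }
    }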

    Read the article

  • String Format for DateTime in C#

    - by SAMIR BHOGAYTA
    String Format for DateTime [C#]
    This example shows how to format DateTime using the String.Format method. All formatting can also be done using the DateTime.ToString method.
    Custom DateTime Formatting
    There are the following custom format specifiers: y (year), M (month), d (day), h (hour 12), H (hour 24), m (minute), s (second), f (second fraction), F (second fraction, trailing zeroes are trimmed), t (P.M or A.M) and z (time zone). The following examples demonstrate how the format specifiers are rewritten in the output.
    [C#]
    // create date time 2008-03-09 16:05:07.123
    DateTime dt = new DateTime(2008, 3, 9, 16, 5, 7, 123);
    String.Format("{0:y yy yyy yyyy}", dt);  // "8 08 008 2008"   year
    String.Format("{0:M MM MMM MMMM}", dt);  // "3 03 Mar March"  month
    String.Format("{0:d dd ddd dddd}", dt);  // "9 09 Sun Sunday" day
    String.Format("{0:h hh H HH}", dt);      // "4 04 16 16"      hour 12/24
    String.Format("{0:m mm}", dt);           // "5 05"            minute
    String.Format("{0:s ss}", dt);           // "7 07"            second
    String.Format("{0:f ff fff ffff}", dt);  // "1 12 123 1230"   sec. fraction
    String.Format("{0:F FF FFF FFFF}", dt);  // "1 12 123 123"    without zeroes
    String.Format("{0:t tt}", dt);           // "P PM"            A.M. or P.M.
    String.Format("{0:z zz zzz}", dt);       // "-6 -06 -06:00"   time zone
    You can also use the date separator / (slash) and time separator : (colon). These characters will be rewritten to the characters defined in the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator.
    [C#]
    // date separator in german culture is "." (so "/" changes to ".")
    String.Format("{0:d/M/yyyy HH:mm:ss}", dt);  // "9/3/2008 16:05:07" - english (en-US)
    String.Format("{0:d/M/yyyy HH:mm:ss}", dt);  // "9.3.2008 16:05:07" - german (de-DE)
    Here are some examples of custom date and time formatting:
    [C#]
    // month/day numbers without/with leading zeroes
    String.Format("{0:M/d/yyyy}", dt);    // "3/9/2008"
    String.Format("{0:MM/dd/yyyy}", dt);  // "03/09/2008"
    // day/month names
    String.Format("{0:ddd, MMM d, yyyy}", dt);    // "Sun, Mar 9, 2008"
    String.Format("{0:dddd, MMMM d, yyyy}", dt);  // "Sunday, March 9, 2008"
    // two/four digit year
    String.Format("{0:MM/dd/yy}", dt);    // "03/09/08"
    String.Format("{0:MM/dd/yyyy}", dt);  // "03/09/2008"
    Standard DateTime Formatting
    In DateTimeFormatInfo there are standard patterns defined for the current culture. For example, the property ShortTimePattern is a string containing the value h:mm tt for the en-US culture and HH:mm for the de-DE culture. The following table shows the patterns defined in DateTimeFormatInfo and their values for the en-US culture. The first column contains the format specifiers for the String.Format method.
    Specifier | DateTimeFormatInfo property      | Pattern value (for en-US culture)
    t         | ShortTimePattern                 | h:mm tt
    d         | ShortDatePattern                 | M/d/yyyy
    T         | LongTimePattern                  | h:mm:ss tt
    D         | LongDatePattern                  | dddd, MMMM dd, yyyy
    f         | (combination of D and t)         | dddd, MMMM dd, yyyy h:mm tt
    F         | FullDateTimePattern              | dddd, MMMM dd, yyyy h:mm:ss tt
    g         | (combination of d and t)         | M/d/yyyy h:mm tt
    G         | (combination of d and T)         | M/d/yyyy h:mm:ss tt
    m, M      | MonthDayPattern                  | MMMM dd
    y, Y      | YearMonthPattern                 | MMMM, yyyy
    r, R      | RFC1123Pattern                   | ddd, dd MMM yyyy HH':'mm':'ss 'GMT' (*)
    s         | SortableDateTimePattern          | yyyy'-'MM'-'dd'T'HH':'mm':'ss (*)
    u         | UniversalSortableDateTimePattern | yyyy'-'MM'-'dd HH':'mm':'ss'Z' (*)
    (*) = culture independent
    The following examples show the usage of standard format specifiers in the String.Format method and the resulting output.
    [C#]
    String.Format("{0:t}", dt);  // "4:05 PM"                           ShortTime
    String.Format("{0:d}", dt);  // "3/9/2008"                          ShortDate
    String.Format("{0:T}", dt);  // "4:05:07 PM"                        LongTime
    String.Format("{0:D}", dt);  // "Sunday, March 09, 2008"            LongDate
    String.Format("{0:f}", dt);  // "Sunday, March 09, 2008 4:05 PM"    LongDate+ShortTime
    String.Format("{0:F}", dt);  // "Sunday, March 09, 2008 4:05:07 PM" FullDateTime
    String.Format("{0:g}", dt);  // "3/9/2008 4:05 PM"                  ShortDate+ShortTime
    String.Format("{0:G}", dt);  // "3/9/2008 4:05:07 PM"               ShortDate+LongTime
    String.Format("{0:m}", dt);  // "March 09"                          MonthDay
    String.Format("{0:y}", dt);  // "March, 2008"                       YearMonth
    String.Format("{0:r}", dt);  // "Sun, 09 Mar 2008 16:05:07 GMT"     RFC1123
    String.Format("{0:s}", dt);  // "2008-03-09T16:05:07"               SortableDateTime
    String.Format("{0:u}", dt);  // "2008-03-09 16:05:07Z"              UniversalSortableDateTime
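    For completeness, a couple of lines showing the DateTime.ToString equivalent mentioned above; the same specifiers apply, with no placeholder needed:
    [C#]
    dt.ToString("MM/dd/yyyy");  // "03/09/2008" - same result as String.Format("{0:MM/dd/yyyy}", dt)
    dt.ToString("d");           // "3/9/2008"   - standard specifiers work too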

    Read the article

  • "Bad binary signature" in ASP.NET MVC application

    - by David M
    We are getting the error above on some pages of an ASP.NET MVC application when it is deployed to a 64-bit Windows 2008 server box. It works fine on our development machines, though these are 32-bit XP. Just wondered if anyone had encountered this before, and has any suggestions? Details as follows:
    Bad binary signature. (Exception from HRESULT: 0x80131192)
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Runtime.InteropServices.COMException: Bad binary signature. (Exception from HRESULT: 0x80131192)
    All projects are set to compile for Any CPU, and are compiled in Release mode. The ASP.NET site is precompiled, and the precompiled build is done on a 64-bit Windows 2008 TeamCity build agent. Thanks in advance.
    EDIT: We're still plagued by this. I have looked at all the binaries in the website's bin directory using corflags.exe. None has the 32BIT flag set, and all have a CorFlags value of 9, except for Antlr3.Runtime.dll which has a value of 1. The problem only affects certain pages, and it seems to be those which use FluentValidation (including the FluentValidation.Mvc and FluentValidation.xValIntegration assemblies). None of these shows anything out of the ordinary when inspected with corflags.exe, and there are no odd-looking dependencies revealed by ildasm. When built locally (32-bit Windows XP) the site deploys and runs fine. When built on the build agents (64-bit Windows 2008 Server) the site displays these errors. The site runs in Integrated Pipeline mode, and is not set to 32-bit. The stack trace is:
    [COMException (0x80131192): Bad binary signature. (Exception from HRESULT: 0x80131192)]
    ASP.views_user_newinternal_aspx.__RenderContent2(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\User\NewInternal.aspx:53
    System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
    ASP.views_shared_site_master.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\Shared\Site.Master:26
    System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
    System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
    System.Web.UI.Page.Render(HtmlTextWriter writer) +38
    System.Web.Mvc.ViewPage.Render(HtmlTextWriter writer) +94
    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4240
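    Since the post above checked bitness by hand with corflags.exe, a small sketch that automates the same check over a bin folder may save time; the path is a placeholder, and AssemblyName.ProcessorArchitecture reports MSIL (AnyCPU) vs X86:
    using System;
    using System.IO;
    using System.Reflection;
    class BinBitnessCheckSketch
    {
        static void Main()
        {
            // Placeholder path - point this at the deployed site's bin folder.
            foreach (string dll in Directory.GetFiles(@"C:\inetpub\MySite\bin", "*.dll"))
            {
                try
                {
                    AssemblyName name = AssemblyName.GetAssemblyName(dll);
                    // MSIL = AnyCPU, X86 = 32-bit only (the suspect case on x64).
                    Console.WriteLine("{0,-12} {1}", name.ProcessorArchitecture, Path.GetFileName(dll));
                }
                catch (BadImageFormatException)
                {
                    // Thrown for native (unmanaged) DLLs.
                    Console.WriteLine("Native?      {0}", Path.GetFileName(dll));
                }
            }
        }
    }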

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >