Search Results

Search found 27011 results on 1081 pages for 'buy vs build'.


  • When to define SDD (System Sequence Diagram) operations System->Actor?

    - by devoured elysium
    I am having some trouble understanding how to make System Sequence Diagrams, as I don't fully grasp why in some cases one should define operations from the System to the Actor and in other cases not. Here is an example. Let's assume the System is a Cinema Ticket Store and the Actor is a client who wants to buy a ticket.

    1) The User tells the System that he wants to buy some tickets, stating his client number.
    2) The System confirms that the given client number is valid.
    3) The User tells the System the movie that he wants to see.
    4) The System shows the set of available sessions and seats for that movie.
    5) The System asks the user which session/seat he wants.
    6) The User tells the System the chosen session/seat.

    This would be converted to:

    a) -----> tellClientNumber(clientNumber)
    b) <----- validClientNumber
    c) -----> tellMovieToSee(movie)
    d) <----- showsAvailableSeatsHours
    e) -----> tellSystemChosenSessionSeat(session, seat)

    I know that when we are dealing with SDDs we are still far away from coding, but I can't help trying to imagine how it would look had I to convert it right away to code. I can understand 1) and 2): it's as if there were a C#/Java method with the signature boolean tellClientNumber(clientNumber), so I put both on the SDD. Then we have the pair 3) and 4). I can imagine that as something like SomeDataStructureThatHoldsAvailableSessionsSeats tellSystemMovieToSee(movie). Now, the problem: from what I've come to understand, my lecturer says that we shouldn't make an operation on the SDD for 5), as we should only show operations from the Actor to the System, and from the System only when it is either presenting us data (as in c)) or validating sent data (as in b)). I find this odd: if I try to imagine this as a DOS app where you have to enter your input sequentially, it makes sense to draw an arrow even for 5). Why is this wrong? How should I try to visualize this? Thanks
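    As a rough illustration (not part of the original question), the message list above can be pictured as a plain system interface. In the sketch below the class name, method names and toy data are all invented; under this reading, step 5) gets no operation of its own because the "question" is just the UI prompting with the data already returned in step 4).

        # Hypothetical sketch: the SDD messages above mapped onto a system API.
        class CinemaTicketStore:
            def __init__(self):
                self.known_clients = {42}                                  # toy data
                self.catalog = {"Up": [("18:00", "A1"), ("21:00", "B3")]}  # toy data
                self.reserved = set()

            def tell_client_number(self, client_number):
                # 1) actor -> system; 2) the validation result is simply the return value
                return client_number in self.known_clients                 # b) validClientNumber

            def tell_movie_to_see(self, movie):
                # 3) actor -> system; 4) the available sessions/seats come back as the
                # return value, so no separate system -> actor arrow is drawn
                return self.catalog.get(movie, [])                         # d) showsAvailableSeatsHours

            def tell_chosen_session_seat(self, session, seat):
                # 6) actor -> system; the "question" in 5) is just the UI prompting the
                # user with the data already returned in 4), not a new system operation
                self.reserved.add((session, seat))
                return True

        store = CinemaTicketStore()
        if store.tell_client_number(42):
            options = store.tell_movie_to_see("Up")      # covers steps 3)-5)
            store.tell_chosen_session_seat(*options[0])  # step 6)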

    Read the article

  • CPU overheating because of Delphi IDE

    - by Altar
    I am using Delphi 7, but I have trialed the Delphi 2005 - 2010 versions. In all these newer versions my CPU utilization is 50% (one core is at 100%, the other is "relaxed") whenever the Delphi IDE is visible on screen. It doesn't happen when the IDE is minimized. My computer is overheating because of this. Any hints as to why this happens? It looks like if I want to upgrade to Delphi 2010, I need to upgrade my cooling system first, and I am a bit lazy about that, especially since I want to discard this computer and buy a new one (in the next 6 months) - probably I will have to buy a Win 7 license too.
    Edit: My CPU is an AMD Dual Core 4600+, with 4GB RAM. By overheating I mean that the temperature of the HDDs is rising because of the CPU.
    Edit: Coming to a possible solution - I just deleted the whole HKCU\CodeGear key and started Delphi "fresh". Guess what? The CPU utilization is 0%. I will investigate more.

    Read the article

  • SSL in tomcat with apr and Centos 6

    - by Jonathan
    I'm facing a problem setting up my tomcat with apr native lib, I have the following: Tomcat: 7.0.42 Java: 1.7.0_40-b43 OS: Centos 6.4 (2.6.32-358.18.1.el6.i686) APR: 1.3.9 Native lib: 1.1.27 OpenSSL: openssl-1.0.0-27.el6_4.2.i686 My server.xml looks like: ... <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> ... <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" SSLCertificateFile="/tmp/monitoringPortalCert.pem" SSLCertificateKeyFile="/tmp/monitoringPortalKey.pem" SSLPassword="hide" /> ... I compiled the native lib as follow: ./configure --with-apr=/usr/bin/apr-1-config --with-ssl=yes --prefix=$CATALINA_HOME make && make install The APR is loaded ok: Oct 06, 2013 7:55:14 PM org.apache.catalina.core.AprLifecycleListener init INFO: Loaded APR based Apache Tomcat Native library 1.1.27 using APR version 1.3.9. But I'm still having this error: SEVERE: Failed to initialize the SSLEngine. org.apache.tomcat.jni.Error: 70023: This function has not been implemented on this platform ./configure outcome [root@localhost native]# ./configure --with-apr=/usr/bin/apr-1-config --with-ssl=yes -- prefix=$CATALINA_HOME && make && make install checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking for working mkdir -p... yes Tomcat Native Version: 1.1.27 checking for chosen layout... tcnative checking for APR... yes setting CC to "gcc" setting CPP to "gcc -E" checking for JDK location (please wait)... /usr/java/jdk1.7.0_40 from environment checking Java platform... checking Java platform... checking for sablevm... NONE adding "-I/usr/java/jdk1.7.0_40/include" to TCNATIVE_PRIV_INCLUDES checking os_type directory... linux adding "-I/usr/java/jdk1.7.0_40/include/linux" to TCNATIVE_PRIV_INCLUDES checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for OpenSSL library... using openssl from /usr/lib and /usr/include checking OpenSSL library version... ok checking for OpenSSL DSA support... yes setting TCNATIVE_LDFLAGS to "-lssl -lcrypto" adding "-DHAVE_OPENSSL" to CFLAGS setting TCNATIVE_LIBS to "" setting TCNATIVE_LIBS to " /usr/lib/libapr-1.la -lpthread" configure: creating ./config.status config.status: creating tcnative.pc config.status: creating Makefile config.status: executing default commands make[1]: Entering directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Nothing to be done for `local-all'. make[1]: Leaving directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Entering directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Nothing to be done for `local-all'. 
make[1]: Leaving directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' /usr/lib/apr-1/build/mkdir.sh /usr/apache-tomcat-7.0.42/include/apr-1 /usr/apache- tomcat-7.0.42/lib/pkgconfig \ /usr/apache-tomcat-7.0.42/lib /usr/apache-tomcat-7.0.42/bin /usr/bin/install -c -m 644 tcnative.pc /usr/apache-tomcat-7.0.42/lib/pkgconfig/tcnative- 1.pc list=''; for i in $list; do \ ( cd $i ; make DESTDIR= install ); \ done /bin/sh /usr/lib/apr-1/build/libtool --mode=install /usr/bin/install -c -m 755 libtcnative-1.la /usr/apache-tomcat-7.0.42/lib libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.so.0.1.27 /usr/apache- tomcat-7.0.42/lib/libtcnative-1.so.0.1.27 libtool: install: (cd /usr/apache-tomcat-7.0.42/lib && { ln -s -f libtcnative- 1.so.0.1.27 libtcnative-1.so.0 || { rm -f libtcnative-1.so.0 && ln -s libtcnative- 1.so.0.1.27 libtcnative-1.so.0; }; }) libtool: install: (cd /usr/apache-tomcat-7.0.42/lib && { ln -s -f libtcnative- 1.so.0.1.27 libtcnative-1.so || { rm -f libtcnative-1.so && ln -s libtcnative-1.so.0.1.27 libtcnative-1.so; }; }) libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.lai /usr/apache-tomcat- 7.0.42/lib/libtcnative-1.la libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.a /usr/apache-tomcat- 7.0.42/lib/libtcnative-1.a libtool: install: chmod 644 /usr/apache-tomcat-7.0.42/lib/libtcnative-1.a libtool: install: ranlib /usr/apache-tomcat-7.0.42/lib/libtcnative-1.a libtool: install: warning: remember to run `libtool --finish /usr/local/apr/lib' make && make install outcome: make[1]: Entering directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Nothing to be done for `local-all'. make[1]: Leaving directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Entering directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' make[1]: Nothing to be done for `local-all'. 
make[1]: Leaving directory `/usr/apache-tomcat-7.0.42/bin/tomcat-native-1.1.27- src/jni/native' /usr/lib/apr-1/build/mkdir.sh /usr/apache-tomcat-7.0.42/include/apr-1 /usr/apache- tomcat-7.0.42/lib/pkgconfig \ /usr/apache-tomcat-7.0.42/lib /usr/apache-tomcat-7.0.42/bin /usr/bin/install -c -m 644 tcnative.pc /usr/apache-tomcat-7.0.42/lib/pkgconfig/tcnative- 1.pc list=''; for i in $list; do \ ( cd $i ; make DESTDIR= install ); \ done /bin/sh /usr/lib/apr-1/build/libtool --mode=install /usr/bin/install -c -m 755 libtcnative-1.la /usr/apache-tomcat-7.0.42/lib libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.so.0.1.27 /usr/apache- tomcat-7.0.42/lib/libtcnative-1.so.0.1.27 libtool: install: (cd /usr/apache-tomcat-7.0.42/lib && { ln -s -f libtcnative- 1.so.0.1.27 libtcnative-1.so.0 || { rm -f libtcnative-1.so.0 && ln -s libtcnative- 1.so.0.1.27 libtcnative-1.so.0; }; }) libtool: install: (cd /usr/apache-tomcat-7.0.42/lib && { ln -s -f libtcnative- 1.so.0.1.27 libtcnative-1.so || { rm -f libtcnative-1.so && ln -s libtcnative-1.so.0.1.27 libtcnative-1.so; }; }) libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.lai /usr/apache-tomcat- 7.0.42/lib/libtcnative-1.la libtool: install: /usr/bin/install -c -m 755 .libs/libtcnative-1.a /usr/apache-tomcat- 7.0.42/lib/libtcnative-1.a libtool: install: chmod 644 /usr/apache-tomcat-7.0.42/lib/libtcnative-1.a libtool: install: ranlib /usr/apache-tomcat-7.0.42/lib/libtcnative-1.a libtool: install: warning: remember to run `libtool --finish /usr/local/apr/lib' It seems everything is fine, but the error is not self-explanatory Could you guys help to understand where my error is? What am I missing? Thanks in advance for your support.

    Read the article

  • How to find virtualization performance bottlenecks?

    - by Martin
    We have recently started moving our C++ build server(s) from real machines into VMs (MS Hyper-V). We have some performance issues that I currently have no idea how to address. We have:

    Test-Box - a piece of desktop workstation hardware my co-worker used to set up the VM before we moved it to the actual server hardware
    Srv-Box - the server hardware
    Test-Box-Real - Windows running directly on the Test-Box HW
    Test-Box-VM - Windows in a Hyper-V VM on the Test-Box HW
    Srv-Box-Real - Server2008R2 running on the Srv-Box HW
    Srv-Box-VM - Windows running in a Hyper-V VM on the Srv-Box HW, i.e. on Srv-Box-Real

    Now, the problem is that we compared build times between Test-Box-Real and Test-Box-VM and they were basically equal (within about 2%). Then we moved the VM to the Srv-Box machine, and what we see there is a significant performance degradation between Srv-Box-Real and Srv-Box-VM. That is, where we saw no difference on the test HW, we now see major differences in performance on the actual server HW (builds are about 50% slower inside the VM). I should add that both the Test-Box and the Srv-Box are only running this one single VM and doing nothing else. I should also note that the "Real" OS is Win2008R2 (64-bit) and the VM-hosted OS is Win2003R2 (32-bit).

    Hardware specs:
    Srv-Box: Intel Xeon E5640 @ 2.67GHz (this means 8 cores with hyperthreading on the real system and "only" 4 cores in the VM, since Hyper-V doesn't allow hyperthreading, but the number of cores doesn't seem to explain the problem here), 16GB RAM (we have 4GB assigned to the VM), virtual DELL RAID 1 (2x 450GB HUS156045VLS600 Hitachi 15k SAS drives)
    Test-Box: Intel Xeon E3-1245 @ 3.3GHz, 16GB RAM, WD VelociRaptor 600GB 10k RPM SATA

    Note again that I'm only concerned with the difference between Srv-Box-Real and Srv-Box-VM (high) vs. the difference seen between Test-Box-Real and Test-Box-VM (low). Why would one machine show parity when comparing VM vs. real performance, while the other (server-grade HW, no less) shows a large disparity? (Both being Xeon CPUs...)
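    If it helps to keep the comparison repeatable, here is a minimal timing sketch (an illustration with assumed values, not from the original post): run the same build command a few times on each box and compare the averages. The build command below is a placeholder; substitute whatever the build server actually invokes.

        import subprocess, time

        BUILD_CMD = ["msbuild", "MySolution.sln", "/t:Rebuild"]   # placeholder command
        RUNS = 3

        timings = []
        for i in range(RUNS):
            start = time.time()
            subprocess.run(BUILD_CMD, check=True)                 # run one full rebuild
            timings.append(time.time() - start)
            print("run %d: %.1f s" % (i + 1, timings[-1]))

        print("average over %d runs: %.1f s" % (RUNS, sum(timings) / RUNS))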

    Read the article

  • Visual Studio 2012 Very Slow Typing

    - by DaoCacao
    I have a problem. After the SP1 update, once some time has passed, VS 2012 becomes very, very slow when typing text. The solution size is not big and the PC is quite powerful: 16GB of RAM, an SSD drive, and an i7-2600. I attached to it from another VS instance and I see a lot of exceptions in the debugger:

        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
        The thread 0x288c has exited with code 0 (0x0).

    Does anyone have any idea what CVcsException is? Googling it gives almost nothing. How do I get rid of this problem?

    Read the article

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".
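    For what it's worth, the quoted break-even figures fall out of simple arithmetic: a Heavy Utilization RI charges its low hourly rate for every hour of the term, so its yearly cost is essentially fixed, while on-demand cost scales with utilization. The sketch below uses made-up placeholder rates (not actual AWS prices) purely to show the shape of the comparison.

        HOURS_PER_YEAR = 8766

        on_demand_rate = 0.32    # $/hour, hypothetical
        ri_upfront     = 1000.0  # $ one-time, hypothetical heavy-utilization RI
        ri_hourly      = 0.13    # $/hour, hypothetical, billed for every hour of the term

        def yearly_cost_on_demand(utilization):
            return on_demand_rate * HOURS_PER_YEAR * utilization

        def yearly_cost_heavy_ri():
            # heavy-utilization RIs bill the low hourly rate for the whole term,
            # whether or not the instance is running
            return ri_upfront + ri_hourly * HOURS_PER_YEAR

        for u in (0.25, 0.50, 0.75, 1.00):
            od, ri = yearly_cost_on_demand(u), yearly_cost_heavy_ri()
            print("utilization %3d%%: on-demand $%7.0f  vs  heavy RI $%7.0f" % (u * 100, od, ri))
        # For a 24/7 web server utilization is ~100%, which is where the heavy RI pays off
        # regardless of how lightly the CPU is actually loaded.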

    Read the article

  • How can I block a specific type of DDoS attack?

    - by Mark
    My site is being attacked and is using up all the RAM. I looked at the Apache logs and every malicious hit seems to simply be a POST request on /, which is never required by a normal user. So I thought and wondered if there's any sort of solution or utility that will monitor my Apache logs and block every IP that performs a POST request on the site root. I'm not familiar with DDoS protection and searching didn't seem to give me an answer, so I came here. Thanks. Example logs: 103.3.221.202 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 122.72.80.100 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11" 122.72.28.15 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)" 210.75.120.5 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0" 122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" 122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" 122.72.124.3 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:13.0) Gecko/20100101 Firefox/13.0.1" 122.72.112.148 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:13.0) Gecko/20100101 Firefox/13.0.1" 190.39.210.26 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/5.0 (Windows NT 6.0; rv:13.0) Gecko/20100101 Firefox/13.0.1" 210.213.245.230 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)" 101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 101.44.1.28 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 101.44.1.28 - - [30/Sep/2012:16:02:14 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" 101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11" 101.44.1.25 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11" 211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6" 211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)" 211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)" 101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11" 101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 211.161.152.108 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 101.44.1.28 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 211.161.152.106 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1" 103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3" 101.44.1.28 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)" 211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" 211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" 211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6" 101.44.1.25 - - [30/Sep/2012:16:02:10 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11" 122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 122.72.124.2 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1" 210.213.245.230 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.0" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)" iptables -L: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination - bui@debian:~$ sudo iptables -I INPUT 1 -m string --algo bm --string 'Keep-Alive: 300' -j DROP iptables: No chain/target/match by that name. 
bui@debian:~$ sudo iptables -A INPUT -m string --algo bm --string 'Keep-Alive: 300' -j DROP iptables: No chain/target/match by that name.
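    One common approach for this kind of pattern is a log-watching tool such as fail2ban with a custom filter for "POST / "; the sketch below shows the same idea by hand, scanning the access log and printing (not executing) ban commands. The log path and threshold are assumptions. (As an aside, the "No chain/target/match by that name" error above usually means the iptables string-match module isn't available on that kernel, which is a separate issue from the blocking strategy.)

        import re
        from collections import Counter

        LOG = "/var/log/apache2/access.log"   # assumed path
        THRESHOLD = 10                        # ban after this many POSTs to the site root

        pattern = re.compile(r'^(\S+) .* "POST / HTTP/1\.[01]"')
        hits = Counter()

        with open(LOG) as fh:
            for line in fh:
                m = pattern.match(line)
                if m:
                    hits[m.group(1)] += 1

        for ip, count in hits.items():
            if count >= THRESHOLD:
                print("iptables -I INPUT -s %s -j DROP   # %d root POSTs" % (ip, count))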

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize is their reserved instances vs. on-demand instances.

    Situation 1: Purchase a reserved multi-AZ setup with an extra-large high-memory (17GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17GB RAM instance all the time - and especially not a hot standby - what savings could a better topology create, like potentially situation 2 below?

    Situation 2: Purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby, to receive the writes only. Then create and load balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 + the on-demand usage of the read slaves.

    My thinking is: if I have a variable, read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology where I don't need them. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, while allowing me to scale up/down and/or out at the read level according to demand. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R
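    As a purely illustrative comparison (the $5200 and $1000 figures come from the question; the read-replica hourly rate and utilization levels below are invented placeholders, not RDS prices), the yearly cost of situation 2 depends mostly on how many replica-hours are actually consumed:

        HOURS_PER_YEAR = 8766

        situation_1 = 5200.0             # reserved multi-AZ XL hi-mem, per year (from the question)
        reserved_small_masters = 1000.0  # per year, from the question
        replica_rate = 0.24              # $/hour per on-demand read replica, hypothetical

        def situation_2(avg_replicas, utilization):
            # the replicas are on-demand, so you only pay for the hours they actually run
            return reserved_small_masters + replica_rate * HOURS_PER_YEAR * avg_replicas * utilization

        for n in (1, 2, 4):
            for util in (0.3, 0.6, 1.0):
                print("situation 2, %d replicas at %3d%% utilization: $%6.0f/yr (vs $%.0f/yr for situation 1)"
                      % (n, util * 100, situation_2(n, util), situation_1))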

    Read the article

  • Comparing 2 (or 3 Files If Possible) "Line By Line"

    - by PythEch
    I want to find the differences between 2 (or 3 files, if possible) line by line. diffutils can do this, but it gives inaccurate results: the 2 files have the exact same number of lines, which is 134, yet diff reports "Added Lines" and "Removed Lines". This is wrong - they have exactly the same number of lines, and there are no added or removed lines. The text files I want to compare contain only numbers; maybe that's why the algorithm fails. I couldn't find any option to prevent that; I may be wrong, I mean there should be an option for it, but again, I couldn't find one. This is what I get (5am.txt vs 6am.txt, there is a huge problem): [screenshot] This is what I want (6am.txt vs 7am.txt, still has problems): [screenshot] But the first image still has this problem at the last lines.
    Edit: After I figured out that there is no utility to do this, I handled it myself. I did almost the same thing as RedGrittyBrick did. This script imitates the diff utility so that I (or you) can use it with diff2html. To use it with diff2html, just change the line
        diff_stdout = os.popen("diff %s" % string.join(argv[1:]), "r")
    to
        diff_stdout = os.popen("script.py %s" % string.join(argv[1:]), "r")
    and name this script whatever you want:

        import sys

        f1 = open(sys.argv[1], "r")
        f1_read = f1.readlines()
        f1.close()
        f2 = open(sys.argv[2], "r")
        f2_read = f2.readlines()
        f2.close()

        changed = {}
        first_c = ""
        for n in range(len(f1_read)):
            if f1_read[n] != f2_read[n]:
                if first_c == "":
                    first_c = n + 1
                changed[first_c] = n + 1
            else:
                first_c = ""

        # Let's imitate diff-utils...
        for (x, y) in changed.items():
            print "%d,%dc%d,%d" % (x, y, x, y)
            for i in range(x, y + 1):
                sys.stdout.write("< %s" % f1_read[i - 1])
            print "---"
            for i in range(x, y + 1):
                sys.stdout.write("> %s" % f2_read[i - 1])

    Final results: [screenshot]

    Read the article

  • Servers / ram for social network- how many?

    - by Marty
    I am launching my social network soon and looking into hosting. The question I am stuck on is: do I need separate servers for web vs. database vs. image handling, since there is photo sharing? Or does one server handle it all? Also, is more RAM better? If I get 50GB of RAM, is that better than having 8GB?
    EDIT: It is PHP (CodeIgniter) and MySQL for now (switching to a NoSQL DB later if demand calls for it). I will be using memcache also. Concept-wise it is similar to Yelp: geographic-based, with lots of user content, image sharing, live feeds and privacy levels. The user plan is an open question; without testing the demand for this I can't give a number. But the concept is unique - no one out there has the set of features I am releasing, so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, then I will upgrade.
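    As a back-of-the-envelope check (all multipliers below are assumptions, not figures from the question), 1-2 million views/month translates into a fairly modest request rate, which is worth knowing before deciding between one box and several:

        SECONDS_PER_MONTH = 30 * 24 * 3600

        views_per_month = 2000000     # upper end of the stated range
        requests_per_view = 15        # assumed: page + assets + AJAX calls
        peak_factor = 5               # assumed: busiest hour vs. the monthly average

        avg_rps = views_per_month * requests_per_view / SECONDS_PER_MONTH
        peak_rps = avg_rps * peak_factor

        print("average: %.1f requests/sec, assumed peak: %.1f requests/sec" % (avg_rps, peak_rps))
        # Roughly 12 avg / 60 peak requests/sec under these assumptions, which suggests
        # measuring real demand matters more than buying 50GB of RAM up front.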

    Read the article

  • Apache Named Virtual Hosts and HTTPS

    - by Freddie Witherden
    I have an SSL certificate which is valid for multiple (sub-)domains. In Apache I have configured this as follows. In /etc/apache2/apache2.conf:

        NameVirtualHost <my ip>:443

    Then for one named virtual host I have:

        <VirtualHost <my ip>:443>
            ServerName ...
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            SSLCACertificateFile ...
        </VirtualHost>

    Finally, for every other site I want to be accessible over HTTPS I just have a:

        <VirtualHost <my ip>:443>
            ServerName ...
        </VirtualHost>

    The good news is that it works. However, when I start Apache I get warning messages:

        [warn] Init: SSL server IP/port conflict: Domain A:443 (...) vs. Domain B:443 (...)
        [warn] Init: SSL server IP/port conflict: Domain C:443 (...) vs. Domain B:443 (...)
        [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!

    So, my question is: how should I be configuring this? Clearly from the warning messages I am doing something wrong (although it does work!); however, the above configuration was the only one I could get to work. It is somewhat annoying, as the configuration files have an explicit dependence on my IP address.

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend which handle them. These tornado servers take about 5-10ms to respond to one request, but that is only the internal compute time of each request. I want to put this setup under test: I want to measure the response time from the ELB to the upstream and back, see how it varies as QPS throughput is increased, and plot a graph of time vs. QPS vs. latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the numbers out? I would also need to write a self-monitor which keeps checking the end-to-end response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
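    Load-testing tools such as ApacheBench or JMeter can already produce latency-vs-throughput numbers; for the self-monitor part, a minimal sketch is below. The URL, payload and interval are placeholders, and probing from inside one of the instances measures a shorter path than a real client sees, so treat the numbers as relative rather than absolute.

        import time
        import urllib.request

        URL = "http://my-elb-endpoint.example.com/"   # placeholder ELB address
        PAYLOAD = b"ping=1"                           # placeholder POST body
        INTERVAL = 5                                  # seconds between probes

        while True:
            start = time.time()
            try:
                with urllib.request.urlopen(URL, data=PAYLOAD, timeout=10) as resp:
                    status = resp.status
            except Exception as exc:
                status = "error: %s" % exc
            latency_ms = (time.time() - start) * 1000.0
            print("%s  status=%s  latency=%.1f ms" % (time.strftime("%H:%M:%S"), status, latency_ms))
            time.sleep(INTERVAL)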

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a Hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration for the most part is working, what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I am accessing Outlook on a public wifi I am able to bring up the cloud mailboxes and on premise public folders show up in Outlook. When I am accessing email via Outlook as a cloud user on the same LAN as the on premise exchange, the cloud user makes the outlook.com connection for live/ad/archive mailbox but fails to create a proxy connection for the on premise public folders. The error I get is a certificate mismatch, it seems that when a user on the LAN accesses Outlook/Exchange it is using a different certificate vs. when Outlook is launched on a WiFi network. When I look at the Outlook connection information, I see the connection to outlook.com for ad/live/archive mailbox but no entry for public folder connection. Our on premise Exchange is 2010 SP3 with latest CUs. The client is a domain joined laptop with Windows 7 and Office 2010 SP2, latest windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question then is, what do I need to do to make sure that the Cloud user launching Outlook on the LAN uses the proper certificate (the wildcard 3rd party cert.. vs. the self signed certificate which it looks like it may be using during the connection attempt).

    Read the article

  • Unified inbox shows twice on Thunderbird

    - by That Help Vampire Guy
    I'm using Thunderbird 24. If I show folders in Unified mode, my inbox folder shows up twice. If I choose the "All" folders mode, I see only one inbox. The issue started when I was using Ubuntu 12.04, but now I'm on Fedora 19 (I have migrated the folders in /home). I do remember it not being duplicated, but then it started while I was still on Ubuntu. I noticed it when using the Conversations plugin, but I had previously used the plugin without it happening; I have disabled the plugin and it persists.
    What I have tried: if I close Thunderbird and rename the .thunderbird folder in my /home to something else, it will create a new config profile; I have to set up everything again, and then it works as expected - see the images below (Unified vs. All Folders, before and after resetting). I'm trying to avoid resetting the profile and creating a fresh new one, because the server - MS Exchange - doesn't support IMAP labels, so I'd lose all the tags on my messages, and I have organized everything based on tags instead of folders.

    Read the article

  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions. Should you always completely separate the storage network switches from the production switches, or are VLANs fine to segment this traffic? Is there a golden rule here? How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC addresses)? If you have two switch options and one has a maximum of 8,000 MAC addresses vs. another with 16,000, what does this really mean to me? How do I make sure one vs. another is sized properly for me? Besides VLAN and jumbo frame support, are there any other "must haves" for a virtual environment's production or storage networks? There is a wealth of knowledge on sizing SANs and such, but this seems equally important, and it's quite challenging to find as much information.
    Just to add some tidbits of information about the environment: the setup above refers to the data centers supporting two different locations, which have about 100 users between the two in total. The storage traffic will be iSCSI, with 3 ESXi hosts and one SAN housing about 2.7TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2, really determining what backplane throughput and switch specifications will be sufficient.

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • Converting a Visual Studio 2003 Web Project to a Visual Studio 2008 Web Application Project

    - by navaneeth
    This walkthrough describes how to convert a Visual Studio .NET 2002 or Visual Studio .NET 2003 Web project to a Visual Studio 2008 Web application project. The Visual Studio 2008 Web application project model is like the Visual Studio 2005 Web application project model. Therefore, the conversion processes are similar. For more information about Web application projects, see ASP.NET Web Application Projects. You can also convert from a Visual Studio .NET Web project to a Visual Studio 2008 Web site project. However, conversion to a Web application project is the approach that is supported, and gives you the convenience of tools to help with the conversion. For example, when you convert to a Visual Studio 2008 Web application project, you can use the Visual Studio Conversion Wizard to automate part of the process. For information about how to convert a Visual Studio .NET Web project to a Visual Studio 2008 Web site, see Common Web Project Conversion Issues and Solutions. There are two parts involved in converting a Visual Studio 2002 or 2003 Web project to a Visual Studio 2008 Web application project. The parts are as follows: Converting the project. You can use the Visual Studio Conversion Wizard for the initial conversion of the project and Web.config files. You can later use the Convert To Web Application command to update the project's files and structure. Upgrading the .NET Framework version of the project. You must upgrade the project's .NET Framework version to either .NET Framework 2.0 SP1 or to .NET Framework 3.5. This .NET Framework version upgrade is required because Visual Studio 2008 cannot target earlier versions of the .NET Framework. You can perform this upgrade during the project conversion, by using the Conversion Wizard. Alternatively, you can upgrade the .NET Framework version after you convert the project.   NoteYou can change a project's .NET Framework version manually. To do so, in Visual Studio open the property pages for the project, click the Application tab, and then select a new version from the Target Framework list. This walkthrough illustrates the following tasks: Opening the Visual Studio .NET project in Visual Studio 2008 and creating a backup of the project files. Upgrading the .NET Framework version that the project targets. Converting the project file and the Web.config file. Converting ASP.NET code files. Testing the converted project. Prerequisites    To complete this walkthrough, you will need: Visual Studio 2008. A Web site project that was created in Visual Studio .NET version 2002 or 2003 that compiles and runs without errors. Converting the Project and Upgrading the .NET Framework Version    To begin, you open the project in Visual Studio 2008, which starts the conversion. It offers you an opportunity to back up the project before converting it. NoteIt is strongly recommended that you back up the project. The conversion works on the original project files, which cannot be recovered if the conversion is not successful.To convert the project and back up the files In Visual Studio 2008, in the File menu, click Open and then click Project. The Open Project dialog box is displayed. Browse to the folder that contains the project or solution file for the Visual Studio .NET project, select the file, and then click Open. NoteMake sure that you open the project by using the Open Project command. 
If you use the Open Web Site command, the project will be converted to the Web site project format.The Conversion Wizard opens and prompts you to create a backup before converting the project. To create the backup, click Yes. Click Browse, select the folder in which the backup should be created, and then click Next. Click Finish. The backup starts. NoteThere might be significant delays as the Conversion Wizard copies files, with no updates or progress indicated. Wait until the process finishes before you continue.When the conversion finishes, the wizard prompts you to upgrade the targeted version of the .NET Framework for the project. To upgrade to the .NET Framework 3.5, click Yes. To upgrade the project to target the .NET Framework 2.0 SP1, click No. It is recommended that you leave the check box selected that asks whether you want to upgrade all Webs in the solution. If you upgrade to .NET Framework 3.5, the project's Web.config file is modified at the same time as the project file. When the upgrade and conversion have finished, a message is displayed that indicates that you have completed the first step in converting your project. Click OK. The wizard displays status information about the conversion. Click Close. Testing the Converted Project    After the conversion has finished, you can test the project to make sure that it runs. This will also help you identify code in the project that must be updated. To verify that the project runs If you know about changes that are required for the code to run with the new version of the .NET Framework, make those changes. In the Build menu, click Build. Any missing references or other compilation issues in the project are displayed in the Error List window. The most likely issues are missing assembly references or issues with dynamically generated types. In Solution Explorer, right-click the Web page that will be used to launch the application, and then click Set as Start Page. On the Debug menu, click Start Debugging. If debugging is not enabled, the Debugging Not Enabled dialog box is displayed. Select the option to add a Web.config file that has debugging enabled, and then click OK. Verify that the converted project runs as expected. Do not continue with the conversion process until all build and run-time errors are resolved. Converting ASP.NET Code Files    ASP.NET Web page files and user-control files in Visual Studio 2008 that use the code-behind model have an associated designer file. The files that you just converted will have an associated code-behind file, but no designer file. Therefore, the next step is to generate designer files. NoteOnly ASP.NET Web pages and user controls that have their code in a separate code file require a separate designer file. For pages that have inline code and no associated code file, no designer file will be generated.To convert ASP.NET code files In Solution Explorer, right-click the project node, and then click Convert To Web Application. The files are converted. Verify that the converted code files have a code file and a designer file. Build and run the project to verify the results of the conversion.

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! This is my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5 I created it using several resources (various certification web sites, msdn, official ms 70-548 book). The reason that I created this review is because a) I am taking the exam. b) MS did not create a book for this exam. Use the(MS 70-548)book. c) To make sure I am familiar with each before the exam. I hope that it provides a good start for your own notes. I hope that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam does contains very little code. It is basically all theory. 1. Validation Controls – How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx) 2. ServiceController – used to start and control the behavior of existing services. 3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility) 4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after exception handling of ArgumentNullException, all ArgumentNullException thrown by the Helper method will be caught and logged correctly. 5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service. 6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user. 7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual projects. 8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug. 9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget. 10. Code reviews should be conducted by a technical lead or a technical peer. 11. Testing Applications 12. WCF Services – application state 13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database– used for client computers to retrieve and save data from a shared location. 14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline user and minimum dependency on external components. 15. MDI and SDI Forms (specifically IsMDIContainer) 16. GUID – in the case of data warehousing, it is important to define unique keys. 17. Encrypting / Security Data 18. Understanding of Isolated Storage/Proper location to store items 19. LINQ to SQL 20. Multithreaded access 21. 
ADO.NET Entity Framework model 22. Marshal.ReleaseComObject 23. Common User Interface Layout (ComboBox, ListBox, Listview, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl) 24. DataSets Class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx 25. SQL Server 2008 Reporting Services (SSRS) 26. SystemIcons.Shield (Vista UAC) 27. Leverging stored procedures to perform data manipulation for a database schema that can change. 28. DataContext 29. Microsoft Windows Installer Packages, ClickOnce(bootstrapping features), XCopy. 30. Client Application Services – will authenticate users by using the same data source as a ASP.NET web application. 31. SQL Server 2008 Caching 32. StringBuilder 33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx 34. Logging erros 35. Testing performance related issues. 36. Role Based Security, GenericIdentity and GenericPrincipal 37. System.Net.CookieContainer will store session data for webapps (see isolated storage for winforms) 38. .NET CLR Profiler tool will identify objects that cause performance issues. 39. ADO.NET Synchronization (SyncGroup) 40. Globalization - CultureInfo 41. IDisposable Interface- reports on several questions relating to this. 42. Adding timestamps to determine whether data has changed or not. 43. Converting applications to .NET Framework 3.5 44. MicrosoftReportViewer 45. Composite Controls 46. Windows Vista KNOWN folders. 47. Microsoft Sync Framework 48. TypeConverter -Provides a unified way of converting types of values to other types, as well as for accessing standard values and sub properties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx 49. Concurrency control mechanisms The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort a transaction, if the desired rules are violated. Pessimistic - Block operations of a transaction, if they may cause violation of the rules. Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to transaction's end, as done with optimistic. 50. AutoResetEvent 51. Microsoft Messaging Queue (MSMQ) 4.0 52. Bulk imports 53. KeyDown event of controls 54. WPF UI components 55. UI process layer 56. GAC (installing, removing and queuing) 57. Use a local database cache to reduce the network bandwidth used by applications. 58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.

    Read the article

  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything's working fine until I try and start MongoDB. somekittens@DLserver01:~$ mongo MongoDB shell version: 2.2.2 connecting to: test Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91 exception: connect failed Googling the error lead me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get, have not built from source). Mongo's log shows the following error: Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Thu Dec 13 18:36:32 Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01 Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Thu Dec 13 18:36:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Thu Dec 13 18:36:32 [initandlisten] ** with --journal, the limit is lower Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5 Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" } Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal" ************** Unclean shutdown detected. Please visit http://dochub.mongodb.org/core/repair for recovery instructions. ************* Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating Thu Dec 13 18:36:32 dbexit: Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator... Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files... Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished Thu Dec 13 18:36:32 dbexit: really exiting now Running through the recovery instructions lead to the following adventure: somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. 
Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:42:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:42:54 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:42:54 [initandlisten] options: { repair: true } Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296 ********************************************************************* ERROR: dbpath (/data/db/) does not exist. Create this directory or give existing directory in --dbpath. See http://dochub.mongodb.org/core/startingandstoppingmongo ********************************************************************* , terminating Sun Dec 16 22:42:54 dbexit: Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files... Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished Sun Dec 16 22:42:54 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:43:51 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:43:51 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:43:51 [initandlisten] options: { repair: true } Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating Sun Dec 16 22:43:51 dbexit: Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator... 
Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files... Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor Sun Dec 16 22:43:51 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ service mongodb stop stop: Unknown instance: somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:45:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:45:04 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:45:04 [initandlisten] options: { repair: true } Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal" Sun Dec 16 22:45:04 [initandlisten] finished checking dbs Sun Dec 16 22:45:04 dbexit: Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files... Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:45:04 dbexit: really exiting now Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
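Reading the logs above, two separate things appear to be going on: a stale lock file left over from the unclean shutdown, and the fact that running mongod --repair by hand uses the default /data/db path (and root ownership) instead of the packaged /var/lib/mongodb. A rough recovery sequence, assuming the stock Ubuntu package layout shown in the log (dbpath /var/lib/mongodb, service name mongodb, service account mongodb), would be:

# stop any half-started instance and clear the stale lock
sudo service mongodb stop
sudo rm /var/lib/mongodb/mongod.lock

# repair against the dbpath the package actually uses, running as the mongodb user
sudo -u mongodb mongod --dbpath /var/lib/mongodb --repair

# make sure nothing is left owned by root from earlier manual runs
sudo chown -R mongodb:mongodb /var/lib/mongodb

# start the packaged service again and check the log
sudo service mongodb start
tail /var/log/mongodb/mongodb.log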

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library. In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build. Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed. It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database. You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well. This provides a really easy way to validate property values on your models. I showed some code samples of this in action in my previous post.

Class-Level Model Validation using IValidatableObject

DataAnnotation attributes provide an easy way to validate individual property values on your model classes. Several people have asked - "Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules that need to span multiple property values?" It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes.

IValidatableObject.Validate() Method

Below is an example of using the IValidatableObject interface (which is built into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class. The two rules ensure that:
- New units can't be ordered if the Product is in a discontinued state
- New units can't be ordered if there are already more than 100 units in stock
We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI).

Automatic Validation Enforcement

EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved. You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the "SaveChanges()" method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors. Starting with CTP5, you can call the "GetValidationErrors()" method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with. GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.
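The code for the Product class appears as screenshots in the original post; as a rough sketch (with property names assumed from the description above), the class and its Validate() method might look like this:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Product : IValidatableObject
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public bool Discontinued { get; set; }
    public int UnitsInStock { get; set; }
    public int UnitsOnOrder { get; set; }

    // Class-level rules that span multiple properties.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Rule 1: new units can't be ordered for a discontinued product.
        if (Discontinued && UnitsOnOrder > 0)
            yield return new ValidationResult(
                "Units can not be ordered for a discontinued product.",
                new[] { "UnitsOnOrder" });

        // Rule 2: new units can't be ordered when more than 100 units are already in stock.
        if (UnitsInStock > 100 && UnitsOnOrder > 0)
            yield return new ValidationResult(
                "Units can not be ordered when more than 100 units are already in stock.",
                new[] { "UnitsOnOrder" });
    }
}

Each ValidationResult names "UnitsOnOrder" as the offending property, which is what lets the Html.ValidationMessageFor() helper described later in the post place the error message next to that textbox.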
Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges():

ASP.NET MVC 3 and IValidatableObject

ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC's model binding infrastructure. ASP.NET MVC 3 goes further and also honors the IValidatableObject interface. This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur. To see this in action, let's consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two "Create" action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill out. The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that it is valid, and if it is valid saves it in the database. If there are validation issues it redisplays the form with the posted values. The razor view template of our "Create" view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it. The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether the model is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model's DataAnnotations and IValidatableObject.Validate() method returned. We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the "Create" button we got an error message. This was because we ticked the "Discontinued" checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules). You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox? It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the "UnitsOnOrder" property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied:

Keeping things DRY

ASP.NET MVC and EF Code First enable you to keep your validation and business rules in one place (within your model layer), and avoid having them creep into your Controllers and Views. Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application. It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.
This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future.

Summary

EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface. This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced anytime someone tries to persist changes to them to a database. ASP.NET MVC 3 also now supports both DataAnnotations and IValidatableObject, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them as well. This makes it easy to build clean and highly maintainable applications. You don't have to use DataAnnotations or IValidatableObject to perform your validation/business logic. You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want. But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Multi-tenant ASP.NET MVC - Views

    - by zowens
Part I – Introduction Part II – Foundation Part III – Controllers

So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy: the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach at present but will be coming back to the "views" issue in a later post.

What's the deal?

The path I've chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyway. However, I ran across a really neat aspect of the source when I was having a look under the hood. There's an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, they are simply pulled out of the views DLL. Each tenant has its own views DLL that just has ".Views" appended after the assembly name as a convention. The list of reasons for this approach is quite long. The primary motivation is performance. I've had quite a few performance issues in the past and I would like to increase my application's performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output so I can save some bandwidth and load time without having to deal with whitespace removal at runtime.

How to set up Tenants for the Host

In the source, I've provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a "PostBuildStep" installer into the project. I've defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder in the host that will be picked up as a tenant. Here's the code that will achieve it (this belongs in the Post-build event command line field in the Build Events tab of settings):

%systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)"
copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\"
copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\"

The DLLs with a name starting with the target assembly name will be copied to the "Tenants" folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use IL Merge to merge your dependencies with your target DLL. This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild.

I also got a question about how I was setting up the controller factory.
Here are the basics of how I'm setting up tenants inside the host (Global.asax):

protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);

    // create a container just to pull in tenants
    var topContainer = new Container();
    topContainer.Configure(config =>
    {
        config.Scan(scanner =>
        {
            scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants"));
            scanner.AddAllTypesOf<IApplicationTenant>();
        });
    });

    // create selectors
    var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>());
    var containerSelector = new TenantContainerResolver(tenantSelector);

    // clear view engines, we don't want anything other than spark
    ViewEngines.Engines.Clear();

    // set view engine
    ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector));

    // set controller factory
    ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector));
}

The code to set up the tenants isn't actually that hard. I'm utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that there is a dependency on the host in the tenants, and a tenant cannot simply be referenced by a host because of circular dependencies.

Tenant View Engine

TenantViewEngine is a simple delegator to the tenant's specified view engine (a rough sketch of that delegation appears at the end of this post). You might have noticed that a tenant has to define a view engine.

public interface IApplicationTenant
{
    ....
    IViewEngine ViewEngine { get; }
}

The trick comes in specifying the view engine on the tenant side. Here's some of the code that will pull views from the DLL.

protected virtual IViewEngine DetermineViewEngine()
{
    var factory = new SparkViewFactory();
    var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\');
    var assembly = Assembly.LoadFile(file);
    factory.Engine.LoadBatchCompilation(assembly);
    return factory;
}

This code resides in an abstract Tenant where the fields are set up in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark's "Descriptors" that will be used to determine views. There is some trickery in determining the file location… but it works just fine.

Up Next

There are just a few big things left, such as configuring controllers in StructureMap with a convention instead of specifying types directly during container construction, and content resolution. I will also try to find a way to use the Web Forms View Engine in the same multi-tenant way we achieved with the Spark View Engine, without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I'm sure some people would prefer using WebForms because of the maturity of the engine. As always, I love to take questions by email or on twitter. Suggestions are always welcome as well! (Oh, and here's another link to the source code).
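For reference, the delegation that TenantViewEngine performs could look roughly like the sketch below. The ITenantSelector shape (a Select() method returning the current tenant) is assumed for illustration — the real source wires this up through DefaultTenantSelector — and only the IApplicationTenant.ViewEngine property from the snippet above is relied on:

using System.Web.Mvc;

// Assumed shape of the selector passed into TenantViewEngine above.
public interface ITenantSelector
{
    IApplicationTenant Select();
}

public class TenantViewEngine : IViewEngine
{
    private readonly ITenantSelector tenantSelector;

    public TenantViewEngine(ITenantSelector tenantSelector)
    {
        this.tenantSelector = tenantSelector;
    }

    // Resolve the engine of whichever tenant handles the current request.
    private IViewEngine Current
    {
        get { return tenantSelector.Select().ViewEngine; }
    }

    public ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
    {
        return Current.FindPartialView(controllerContext, partialViewName, useCache);
    }

    public ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
    {
        return Current.FindView(controllerContext, viewName, masterName, useCache);
    }

    public void ReleaseView(ControllerContext controllerContext, IView view)
    {
        Current.ReleaseView(controllerContext, view);
    }
}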

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine. .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.

What does 'Replacement' mean?

When you install .NET 4.5, your .NET 4.0 assemblies in \Windows\Microsoft.NET\Framework\v4.0.30319 are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and my production laptop running stock .NET 4.0 (right): Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319. If you open the properties of the System.dll assembly in .NET 4.5 you'll also see: Notice that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 – to the application they appear to be the same runtime version. And that is what Microsoft intends here. .NET 4.5 is intended as an in-place upgrade.

Compile to 4.5, run on 4.0 – not quite!

You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is, until you hit a new feature that doesn't exist on 4.0. At which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0, but only has a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where it only fires occasionally. .NET will happily start your application and run all the 4.0 code fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5 of course, and that should work without much fanfare.

Different than .NET 3.0/3.5

Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both, regardless of which version you use.
What's different is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and the extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.

Good news – Bad news

Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework. No two updates have been the same. Clearly, updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions, which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code to work on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.

Resources

http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx
http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
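If you do need to make an educated guess at runtime, about the only hook available is the build/revision number trick the article calls hokey. A minimal sketch of that check follows — the 17000 cutoff is taken from the beta build numbers quoted above and may need adjusting for later builds:

using System;

class RuntimeVersionGuess
{
    static void Main()
    {
        // Environment.Version reports 4.0.30319.xxx on both .NET 4.0 and 4.5;
        // only the revision differs: ~261 on 4.0 RTM vs. 17379 on the 4.5 beta.
        Version v = Environment.Version;
        bool looksLike45 = v.Major == 4 && v.Minor == 0 && v.Revision >= 17000;
        Console.WriteLine("CLR version: {0} (probably .NET {1})", v, looksLike45 ? "4.5" : "4.0");
    }
}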

    Read the article

  • How to debug manifest errors?

    - by Rryk
    I am creating an application that depends on third-party library, which in turn depends on MSVCP90D.dll (it was compiled with debug symbols). While starting the application it fails to start and provides an error message: I have found such library in C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT and C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT. As you can see one of them is 64-bit, while the other is 32-bit. When I have placed 32-bit into the directory of the application the application silently crashes while loading (log from Visual Studio Output window is below). With the 32-bit one I get another error message: If I press Abort -- programs shuts down, Retry results in breaking into debug session for crt0msg.c file. This is system file and I have no idea how to debug it. If I press Ignore I get yet another error message: So the question is how to debug such errors? Please give me some links where I can read more about it or point me out what exactly I should do in such cases. I know this relates to manifest problems -- please give me a good resource where I can read about resources, since what I have found have confused me even more. This is log for 64-bit version of the MSVCP90D.dll library: 'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.exe', Symbols loaded. 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ntdll.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\kernel32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\KernelBase.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\user32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\gdi32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\lpk.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\usp10.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\msvcrt.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\advapi32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\sechost.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\rpcrt4.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\sspicli.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\cryptbase.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\shell32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\shlwapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\winmm.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\version.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\psapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Symbols loaded (source information stripped). 
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll', Symbols loaded. 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\opengl32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\glu32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ddraw.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\dciman32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\setupapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\dwmapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\secur32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll', Symbols loaded. 'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll' 'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\secur32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\opengl32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ddraw.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dwmapi.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dciman32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\glu32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleacc.dll' 'chrome.exe': Unloaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleaut32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ole32.dll' 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped). The program '[1152] chrome.exe: Native' has exited with code 9 (0x9).
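One common way to dig into side-by-side/manifest activation failures like the ones above is the sxstrace utility that ships with Windows Vista and later. A rough session looks like this (run from an elevated command prompt; the file names are arbitrary):

rem start tracing, then reproduce the application error, then press Enter to stop
sxstrace Trace -logfile:SxsTrace.etl

rem convert the binary trace into readable text and inspect which assembly failed to resolve
sxstrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt
notepad SxsTrace.txt

The parsed log usually names the exact assembly identity the application asked for (for example the Microsoft.VC90.DebugCRT version and processor architecture), which helps show whether the x86 or amd64 debug CRT and its manifest need to be deployed alongside the application.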

    Read the article

  • CodePlex Daily Summary for Sunday, May 30, 2010

    CodePlex Daily Summary for Sunday, May 30, 2010New ProjectsAviva Solutions C# Coding Guidelines: A set of C# coding guidelines, coding standards, layout rules, FxCop rulesets and (upcoming) custom FxCop and StyleCop rules for improving the over...BKWork: private project.classbook: du an trong vong 10 ngay. Nhom (Do Bao Linh, Phan Thanh Tai, Nguyen Dang Loc)Du an - 01: Du an mon thay LuongEndNote助手: EndNote Helper is a assistant tool for EndNote, which make your task of reference management more convinent. EndNote 助手是一个用于辅助EndNote进行文献管理的小工具,它可...Evoucher: Simple Evoucher Sales SystemFiddler Delayed Responses Extension: A fiddler extension that help developers delay the delivery of HTML Responses to applications. Some delay user stories: - Delivery of css to HTML ...Generic Entity Model 2: GEM2 is lightweight entity framework for building custom business solutions. It enables rapid approach to entity logic design, while offering out o...GY06: 这个是长大工院于2009年发起的项目,因为种种原因没有完成。JoshDOS: JoshDOS is a command line operating system kernel based off COSMOS. It can be booted from actual hardware and built in Visual Studio using .NET la...PhysicsFMUDeluxe.NET: Este projeto é desenvolvido com o intuíto de treinar o desenvolvimento de aplicações em C#. Ele contém ferramentas para cálculos de física e matemá...Reactor Services Platform: Reactor is a service composition and deployment grid that streamlines developing, composing, deploying and managing services. Reactor Services are ...SerafinApartment: Simple MVC 2 application for apartment rental. It is multingual booking system, that allows users to register, book and subscribe for notifications...Silverlight Audio Effect Box: This is a C# Silverlight 4 sample application which process audio sample in near real time. It allows to capture the default audio input device and...Smart Voice: Smart Voice let's you control Skype using your voice. It allows you to write messages, issue phone calls, etc. This application was developed think...WebDotNet - The minimalist web framework inspired by web.py: WebDotNet is an experiment in web frameworks. Inspired by the python web framework, web.py, it is an exercise in extreme minimalism in a framework...New ReleasesAcies: Acies - Alpha Build 0.0.7: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...AdventureWorksLT 2008 Sample Database Script: AdventureWorksLT 2008 R2 DB Script: This script is based on latest download from the Sample database for SQL Server 2008 R2. The original download from the sample is approx 84 MB and ...ASP.NET Wiki Control: Release 1.3.1: - Removed ASP.NET Session dependency. BreadCrumbs will now work with sessions disabled. Can now also share a URL and have the breadcrumb appropriat...Aviva Solutions C# Coding Guidelines: Visual Studio 2010 Rule Sets: Rule Sets targetting different styles of projects.bvcms - Bellevue Church Management System: Source: This source was used to build the latest {church}.bvcms.comCC.Yacht: CC.Yacht 1.0.10.529: This is the initial release of CC.Yacht. Marked as beta since I don't have any testing/feedback beyond that of myself and my wife.Community Forums NNTP bridge: Community Forums NNTP Bridge V14: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has add...Community Forums NNTP bridge: Community Forums NNTP Bridge V15: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...Coronasoft Cryostasis scripting engine: CCSE v0.0.1.0 BETA: This is the 0.0.1.0 Beta Release. If you find any bugs Post them as a comment or in the Discussions tabEndNote助手: EndNote助手2.1.0..0: 去除了注册验证机制。Fiddler Delayed Responses Extension: v0.1: Version 0.1 of Fiddler Delayed Responses Extension. See ChangeLog for more information.Generic Entity Model 2: GEM2 build 52510: This is first BETA release of GEM2! Following implementation is still missing from initial plan: Detailed documentation MySQL operational databa...JoshDOS: JoshDOS Souce: This is the souce for the JoshDOS 1.0 OS kernel. You need the COSMOS user kit to use.JoshDOS: Shell Version 1.0: Whats in this download *JoshDOS user kit *JoshDOS VStudio starter kit *JoshDOS documentation Note: You will need the COSMOS user kit to start deve...miniTodo: mini Todo version 0.3: Todo完了時に音が出なかったのを修正Model Virtual Casting - ASPItalia.com: Model Virtual Casting 0.2: Model Virtual Casting 0.2Questa seconda release di ModelVC è corrispondente a quella mostrata in occasione della Real Code Conference 4.0 tenutasi ...PhysicsFMUDeluxe.NET: PhysicsFMUDeluxe.NET - Setup: Primeira versão pública do PhysicsFMUDeluxe.NETSilverlight Audio Effect Box: Echo Box 1.0: First realease - zip contains : web page + xap file :Smart Voice: Smart Voice 0.1: Here is the first alpha release of Smart Voice. Please remember this was done for a curricular unit project at my university and i understand that ...StyleCop Contrib: Custom rules v0.2: This release of the custom rules target StyleCop 4.3.3. Included rules are: Spacing Rules - NoTrailingWhiteSpace - IndentUsingTabs Ordering Rules ...System.ComponentModel.DataAnnotations Contrib: 0.2.46280.0: Built with Visual Studio 2010/.Net 4.0. Compiled from source code changeset 46280.VB Styler: VB Styler Suite V 1.3.0.0: This is the newest version of the VB Styler. Here are the new features. New Imaging ColorPicker A template for getting colors via sliders ColorR...VCC: Latest build, v2.1.30529.0: Automatic drop of latest buildWPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.90 RC: Version: 1.0.0.90 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. ...XNA Collision Detection: XNA Collision Detection Sample Program: I have coded a compact program which shows the collision detection working, and provides a camera class for rendering and moving the "player." Agai...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsAStar.netpatterns & practices – Enterprise LibraryCommunity Forums NNTP bridgeBlogEngine.NETGMap.NET - Great Maps for Windows Forms & PresentationIonics Isapi Rewrite FilterRawrCustomer Portal Accelerator for Microsoft Dynamics CRMFacebook Developer ToolkitPAP

    Read the article

< Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >