Search Results


  • Problem setting up a VPN on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP.
    xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289
    xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
    xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001
    xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002
    xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006
    xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701
    Then it just stops there with no further response. I can't connect to the VPN from my Mac client; the /var/log/system.log messages on the client are:
    Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
    Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501
    Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])...
    Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started
    Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting.
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
    Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
    Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me).
    Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established
    Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server
    Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address]
    Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
    Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
    Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
    Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).
    Can anyone help? Thanks a million!
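    For readers hitting the same wall, here is a minimal sketch of the xl2tpd side of such a setup plus two quick server-side checks. Everything below is illustrative: the file contents, interface name, and address range are assumptions, not taken from the poster's actual configuration.

        # /etc/xl2tpd/xl2tpd.conf -- illustrative values only
        [global]
        listen-addr = <server public IP>
        ipsec saref = no

        [lns default]
        ip range = 10.10.10.2-10.10.10.254
        local ip = 10.10.10.1
        require chap = yes
        refuse pap = yes
        require authentication = yes
        pppoptfile = /etc/ppp/options.xl2tpd
        length bit = yes

        # While a client retries, confirm L2TP traffic actually reaches xl2tpd
        # (the interface name is an assumption):
        sudo tcpdump -ni eth0 'udp port 1701 or udp port 500 or udp port 4500'
        sudo iptables -L -n    # make sure UDP 500/4500/1701 are not being dropped

    Given that the logs show the IPsec SA established and then "L2TP cannot connect to the server", the usual suspects are a firewall dropping UDP 1701 or xl2tpd not answering on the address the decapsulated packets arrive at; the capture above is usually the quickest way to tell which side is silent.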


  • Permissions problem running Apache ActiveMQ

    - by Edd
    I'm wanting to use Apache ActiveMQ on Ubuntu 12.04 LTS, but am running into what looks like a permissions problem when I try to run it as follows:
    edd:~$ sudo activemq --version
    INFO: Loading '/usr/share/activemq/activemq-options'
    INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
    INFO: changing to user 'activemq' to invoke java
    Java Runtime: Sun Microsystems Inc. 1.6.0_24 /usr/lib/jvm/java-6-openjdk-amd64/jre
    Heap sizes: current=502464k free=499842k max=502464k
    JVM args: -Xms512M -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Dactivemq.classpath=/var/lib/activemq//conf;; -Dactivemq.home=/usr/share/activemq -Dactivemq.base=/var/lib/activemq/
    ACTIVEMQ_HOME: /usr/share/activemq
    ACTIVEMQ_BASE: /var/lib/activemq
    ActiveMQ 5.5.0
    For help or more information please see: http://activemq.apache.org
    edd:~$ sudo activemq start
    INFO: Loading '/usr/share/activemq/activemq-options'
    INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
    INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
    INFO: changing to user 'activemq' to invoke java
    -su: line 2: /var/run/activemq.pid: Permission denied
    INFO: pidfile created : '/var/run/activemq.pid' (pid '7811')
    edd:~$ sudo activemq status
    INFO: Loading '/usr/share/activemq/activemq-options'
    INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
    ActiveMQ not running
    edd:~$ ps ax | grep 'activemq'
    8040 pts/0 S+ 0:00 grep --color=auto activemq
    I installed ActiveMQ using sudo apt-get install activemq. Apologies if there's any additional information missing - I'm fairly new to Linux as you may well have guessed!
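    The line that matters in the output above is the shell failing to write /var/run/activemq.pid after the wrapper switches to the unprivileged activemq user. A hedged sketch of how one might confirm that and work around it (the activemq user and group name are assumed from the INFO line; the rest is generic):

        # Who owns the pid file and its directory?
        ls -l /var/run/activemq.pid
        ls -ld /var/run

        # Let the activemq user write its own pid file, then retry
        sudo touch /var/run/activemq.pid
        sudo chown activemq:activemq /var/run/activemq.pid
        sudo activemq start
        sudo activemq status

    Because /var/run is typically cleared at boot, this mainly verifies the diagnosis; a packaging or init-script level fix that creates the pid file with the right owner at startup is the durable solution.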


  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular and I got many questions right after it. The number one question I received was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft.
    Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
    Writers: Eric N. Hanson and Yavor Angelov
    Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed at different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with the optimizer's use of statistics, and it provides guidance for using these features when writing queries so you can obtain good query performance.
    Link to white paper: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
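    As a quick, self-contained illustration of the statistics the paper discusses, the commands below list, inspect, and refresh statistics from the command line with sqlcmd; the database, table, and index names are hypothetical examples, not anything referenced in the post.

        # List statistics objects on a table (example names)
        sqlcmd -S localhost -d AdventureWorks -Q "SELECT name, auto_created FROM sys.stats WHERE object_id = OBJECT_ID('Sales.SalesOrderDetail');"

        # Show the histogram behind one statistics object
        sqlcmd -S localhost -d AdventureWorks -Q "DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID);"

        # Rebuild the statistics with a full scan
        sqlcmd -S localhost -d AdventureWorks -Q "UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;"

    The same statements run unchanged in Management Studio; sqlcmd is used here only to keep the example in one place.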


  • Visual Studio 2013 Static Code Analysis in depth: What, When, and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail: what is static code analysis? When to use it? Supported platforms. Supported Visual Studio versions. How to use it: run Code Analysis manually, run Code Analysis automatically, run Code Analysis while checking source code in to TFS version control (TFSVC), and run Code Analysis as part of Team Build. Understand the Code Analysis results and learn how to fix them. Create your custom rule set. Q & A. References.
    What is static code analysis? The Static Code Analysis feature of Visual Studio analyzes code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of problems, according to Microsoft rules that mainly target best practices in writing code. A large set of these rules is included with Visual Studio, grouped into categories that target specific coding issues such as security, design, interoperability, and globalization. "Static" here means analyzing the source code without executing it; this kind of analysis can be performed by automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (see the "Using Code Review to Improve Quality" video on Channel 9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.
    When to use it? Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always good programming practice. Code analysis can also find defects in your code that are difficult to discover through testing, letting you establish a first-level quality gate for your application during the development phase, before you release it to the testing team.
    Supported platforms: .NET Framework, native (C and C++), and database applications.
    Supported Visual Studio versions: all editions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.
    How to use it? Code Analysis can be run manually at any time from within the Visual Studio IDE, or set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.
    Run Code Analysis manually: to run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.
    Run Code Analysis automatically: to run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.
    Run Code Analysis while checking source code in to TFS version control (TFSVC): Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
(This is available only if you're using Team Foundation Server) Require permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow typically your account must be part of Project Administrators, Project Collection Administrators, for more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or Uncheck the policy option based on the configurations you need to perform as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check Code analysis rule set reference on MSDN What is Rule Set? Rule Set is a group of code analysis rules like the example below where Microsoft.Design is the rule set name where "Do not declare static members on generic types" is the code analysis rule Once you configured the Analysis rule the policy will be enabled for all the team member in this project whenever a team member check-in any source code to the TFSVC the policy section will highlight the Code Analysis policy as below TFS is a very extensible platform so you can simply implement your own custom Code Analysis Check-in policy, check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you have to be aware also about compatibility between different TFS versions check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis" Once configure, Kick-off your build definition to queue a new build, Code Analysis will run as part of build workflow and you will be able to see code analysis warning as part of build report Understand the Code Analysis results & learn how to fix them Now after you went through Code Analysis configurations and the different ways of running it, we will go through the Code Analysis result how to understand them and how to resolve them. Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties, let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
2 Title The title of warning message       3 Description A description of the problem or suggested fix 4 File Name File name and the line of code number which violate the code analysis rule set 5 Category The code analysis category for this error 6 Warning /Error Depend on how you configure it in the rule set the default is Warning level 7 Action Copy: copy the warning information to the clipboard Create Work Item: If you're connected to Team Foundation Server you can create a work item most probably you may create a Task or Bug and assign it for a developer to fix certain code analysis warning Suppress Message: There are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppression.cs also targets the method that generated the warning. It does not suppress the warning globally.       Visual Studio makes it very easy to fix Code analysis warning, all you have to do is clicking on the Check Id hyperlink if you are not aware how to fix the warring and you'll be directed to MSDN online or local copy based on the configuration you did while installing Visual Studio and you will find all the information about the warring including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set, you can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Create and modify a custom rule set required Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCpp http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule set you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrate how to run static code analysis on T-SQL through SSDT ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel
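    To complement the build-integration steps above, here is a hedged sketch of enabling Code Analysis for a build run outside the IDE. The solution and rule-set file names are placeholders; RunCodeAnalysis and CodeAnalysisRuleSet are the standard MSBuild properties behind the project settings described in the post.

        :: From a Developer Command Prompt (paths are placeholders)
        msbuild MySolution.sln /t:Rebuild /p:Configuration=Release /p:RunCodeAnalysis=true /p:CodeAnalysisRuleSet=MyRules.ruleset

    When the build runs under Team Foundation Build instead, the equivalent switch is the "Perform Code Analysis" build process parameter mentioned earlier.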


  • From the Tips Box: Halting Autorun, Android’s Power Strip, and Secure DVD Wiping

    - by Jason Fitzpatrick
    This week we’re kicking off a new series here at How-To Geek focused on awesome reader tips. This week we’re exploring Windows shortcuts, Android widgets, and sparktacular ways to erase digital media.


  • Installation of SAP Web Application Server 6.20 fails

    - by karthikeyan b
    I tried installing a trial version of SAP NetWeaver 7.1 on Windows Vista on my laptop but I couldn't succeed. Then I tried installing SAP Web Application Server 6.20. The installation goes all the way to 91%, at which point I get some errors and the installation stops. Does anyone have any experience with this? Here's the complete error log below: nfo: INSTGUI.EXE Protocol version is 10. Message checksum is 7613888. Info: INSTGUI MessageFile Start message loading... Info: INSTGUI MessageFile Finished message loading. Info: InstController Prepare {} {} R3SETUP Version: Apr 24 2002 Info: InstController Prepare {} {} Logfile will be set to E:\R3SETUP\BSP.log Check E:\R3SETUP\BSP.log for further messages. Info: CommandFileController SyFileVersionSave {} {} Saving original content of file E:\R3SETUP\BSP.R3S ... Warning: CommandFileController SyFileCopy {} {} Function CopyFile() failed at location SyFileCopy-681 Warning: CommandFileController SyFileCopy {} {} errno: 5: Access is denied. Warning: CommandFileController SyFileCreateWithPermissions {} {} errno: 13: Permission denied Warning: CommandFileController SyPermissionSet {} {} Function SetNamedSecurityInfo() failed for E:\R3SETUP\BSP.R3S at location SyPermissionSet-2484 Warning: CommandFileController SyPermissionSet {} {} errno: 5: Access is denied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD KERNEL will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA1 will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA2 will not be copied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. 
Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Warning: ADADBINSTANCE_IND_ADA GetConfirmationFor 2 58 Cleanup database instance BSP for new installation. Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 54 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 54 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1146 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1146 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 718 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 718 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD KERNEL will not be copied. Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA1 will not be copied. Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA2 will not be copied. Info: InstController MakeStepsDeliver 2 309 Requesting Information on CD-ROMs Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Info: CENTRDBINSTANCE_NT_ADA InternalWarmKeyCheck 2 333 The installation phase is starting now. Please look in the log file for further information about current actions. Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 99 Defining Key Values Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 58 Requesting Setup Details Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1206 Setting Users for Single DB landscape Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 75 Stopping the SAP DB Instance Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 247 Stopping the SAP DB remote server Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 1195 Can not read from Z:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: SAPDBSETCDPATH_IND_IND and key: KERNEL_LOCATION Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software Info: SAPDBINSTALL_IND_ADA SyCoprocessCreate 2 1195 Creating coprocess E:\\sapdb\NT\I386\sdbinst.exe ... Info: SAPDBINSTALL_IND_ADA ExecuteDo 2 1195 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 120 Extracting SAP DB Tools Software Info: ADAEXTRACTLCTOOLS_NT_ADA SyCoprocessCreate 2 120 Creating coprocess SAPCAR ... Info: ADAEXTRACTLCTOOLS_NT_ADA SyCoprocessCreate 2 120 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: ADAEXTRACTBSPCFG SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: ADAEXTRACTBSPCFG SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 242 Starting VSERVER Info: ADAXSERVER_NT_ADA SyCoprocessCreate 2 242 Creating coprocess C:\sapdb\programs\bin\x_server.exe ... Info: ADAXSERVER_NT_ADA ExecuteDo 2 242 RC code form SyCoprocessWait = 0 . 
Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA1 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA1 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from Y:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA2 and key: DATA1_LOCATION Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA2 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA2 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from X:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA3 and key: DATA2_LOCATION Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA3 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA3 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 815 We tried to find the label SAPDB:MINI-WAS-DEMO:620:KERNEL for CD KERNEL in path E:\. But the check was not successfull. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXE and key: KERNEL_LOCATION Info: InstController MakeStepsDeliver 2 815 Extracting the SAP Executables Info: EXTRACTSAPEXE SyCoprocessCreate 2 815 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXE SyCoprocessCreate 2 815 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 223 Setting new Rundirectory. Info: ADADBREGISTER_IND_ADA SyCoprocessCreate 2 223 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADADBREGISTER_IND_ADA ExecuteDo 2 223 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 ADASETDEVSPACES_IND_ADA Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 10 Performing Service BCHECK Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_DEFAULT_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_DEFAULT_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_COLD_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_COLD_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_WARM_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_WARM_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 760 Creating the Default Profile Info: DEFAULTPROFILE_IND_IND SyFileVersionSave 2 760 Saving original content of file C:\MiniWAS\DEFAULT.PFL ... Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1267 Modifying or Creating the TPPARAM File Info: TPPARAMMODIFY_NT_ADA SyFileVersionSave 2 1267 Saving original content of file C:\MiniWAS\trans\bin\TPPARAM ... Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1111 Creating the Service Entry for the Dispatcher Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Checking service name sapdp00, protocol tcp, port number 3200 ... Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Port name sapdp00 is known and the port number 3200 is equal to the existing port number 3200 Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1122 Creating the Service Entry for the Message Server Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Checking service name sapmsBSP, protocol tcp, port number 3600 ... Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Port name sapmsBSP is known and the port number 3600 is equal to the existing port number 3600 Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1114 Creating the Service Entry for the Gateway Service Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Checking service name sapgw00, protocol tcp, port number 3300 ... Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Port name sapgw00 is known and the port number 3300 is equal to the existing port number 3300 Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 PISTARTBSP Info: PISTARTBSP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\Desktop\. If it does not exist creating it with user , group and permission 0 ... Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 PISTARTBSPGROUP Info: PISTARTBSPGROUP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Mini SAP Web Application Server. If it does not exist creating it with user , group and permission 0 ...
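    Most of the repeated failures in this log reduce to one cause: R3SETUP cannot rewrite its own command file E:\R3SETUP\BSP.R3S ("Access is denied"), which on Windows Vista usually points at UAC and folder permissions rather than the product itself. A hedged sketch of how one might check and relax the permissions before re-running the installer; the Users group is an assumption, while the path comes from the log.

        :: Run from an elevated command prompt ("Run as administrator")
        icacls "E:\R3SETUP"
        icacls "E:\R3SETUP" /grant Users:(OI)(CI)M /T
        :: ...then launch R3SETUP again from the same elevated prompt

    The later warnings about Z:, Y:, and X: not being readable suggest the installation media paths also need to be reachable; copying the CD contents to a local, writable folder before starting avoids that class of error as well.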


  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
         ORACLE FOCUS Oracle PartnerNetwork Exchange@ ORACLE OpenWorld Sylvie MichouSenior DirectorPartner Marketing & Communications and Strategic Programs RESOURCES -- Oracle OpenWorld 2012 Oracle PartnerNetwork Exchange @ OpenWorld Oracle PartnerNetwork Exchange @ OpenWorld Registration Oracle PartnerNetwork Exchange SpecializationTest Fest Oracle OpenWorld Schedule Builder Oracle OpenWorld Promotional Toolkit for Partners Oracle Partner Events Oracle Partner Webcasts Oracle EMEA Partner News SUBSCRIBE FEEDBACK PREVIOUS ISSUES If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions. 
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September, we will start the week with partner-dedicated OPN Exchange sessions, followed by our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales, and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions and the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle, and will conclude with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100, is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected]. -- Extreme Performance Tour Experts will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single-threaded performance and better overall system throughput for expanded application versatility Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • SQL SERVER – The Difference between Dual Core vs. Core 2 Duo

    - by pinaldave
    I have decided that I would not write on this subject until I have received a total of 25 questions on it. Here are a few questions from the list: Questions: What is the difference between Dual Core and Core 2 Duo? Which one is recommended for SQL Server: Core 2 Duo or Dual Core? Can I upgrade my Dual Core to Core 2 Duo? If Dual Core has 2 CPUs, how many CPUs does Core 2 Duo have? Is it true that Core 2 Duo and Dual Core mean the same thing? Well, let us see the answer. Optimistically, I will direct everybody to this blog post if I receive a question of the same kind sometime in the future. To verify the information that I provide, visit Intel's site. For additional information regarding the subject, visit Wikipedia. My Answer: Any computer that has two CPUs or two "cores" is known as Dual Core. Core Duo is a brand name of Intel for Dual Core. Core 2 Duo is simply a higher version of Core Duo. (e.g. for the Pentium brand, it's like Pentium I, Pentium II, etc.) The computer I am using now has a Core 2 Duo. Intel has since launched new brands, which they call i3, i5, and i7. Here, the numbers are not related to the number of cores; rather, they indicate the range of the CPU: i3 is the low range and i7 is the high range. Feel free to add more details by adding valuable comments here. And if you still want to ask why I created this blog post, well, I mentioned that I was waiting for the 25-question threshold to be hit before writing about this subject, which I didn't really plan to write about. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
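    A hedged aside (not part of Pinal's original post): whatever the brand name on the box, what matters to SQL Server is how many sockets, cores and logical CPUs it can actually see. On SQL Server 2005 and later, a query along these lines reports that directly:

        -- Sketch: inspect the CPU topology this SQL Server instance reports.
        -- cpu_count         = logical CPUs visible to SQL Server
        -- hyperthread_ratio = logical CPUs per physical socket
        SELECT cpu_count,
               hyperthread_ratio,
               cpu_count / hyperthread_ratio AS socket_count
        FROM sys.dm_os_sys_info;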

    Read the article

  • Problem with setup VPN on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
    Humanity's quest to explore the surrounding planets, to see whether we can live there or not, is taking new shape today. NASA's Mars probing robot, Curiosity, blasted off today on its 9-month journey to reach Mars and explore it for the possibilities of life there. Scientists say that Curiosity is the most advanced rover ever launched to probe life on other planets. Here is the launch video and some analysis by a news reporter. Let's look at the 6 interesting facts about the mission 1. It's as big as a car Curiosity is the biggest rover ever launched by NASA to probe life on outer planets. It's as big as a car and almost double the size of its predecessor rover, Spirit. The length of Curiosity is around 9 feet 10 inches (3 meters), its width is 9 feet 1 inch (2.8 meters) and its height is 7 feet (2.1 meters). 2. Powered by Plutonium – Lasts 24×7 for 23 months NASA's earlier missions to explore Mars were powered by solar power, and that hindered the rovers' ability to move around when the Sun was hiding. Due to this dependency on the Sun, the earlier rovers were not able to traverse places where there is no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by plutonium's radioactive decay. The plutonium weighs around 10 pounds and can generate the power required to operate the rover for close to 23 months. The best part of the new power system is that Curiosity can roam around in darkness or light, all year round. 3. Rocket powered backpack for a science fiction style landing Curiosity is so heavy that NASA could not use the parachute-and-airbag air-drop used on its previous missions. They are trying out a new, science-fiction-style landing mechanism that is similar to a sky crane heavy-lift helicopter. The landing of the rover begins with entry into the Mars atmosphere protected by a heat shield. At about 6 miles above the surface, the heat shield is jettisoned and a parachute is deployed to glide the rover down smoothly. When the rover reaches 3 miles above the surface, the parachute is jettisoned and the eight-motor rocket backpack is used for a smooth and impact-free landing, as shown in the image. Here is an animation created by NASA on the landing sequence. If you are interested in getting more detailed information about the landing process, check the landing sequence picture available on the NASA website. 4. Equipped with Star Wars style laser gun Hollywood movie directors and novelists have always imagined aliens coming to Earth with spaceships full of laser guns, blasting the objects that come in their way. With Curiosity the equations are going to change. It has a powerful laser gun equipped on one of its arms to beam lasers at rocks and vaporize them. This is not part of any assault mission Curiosity is expected to carry out; the laser will be used to carry out experiments to detect life and understand nature. 5. Most sophisticated laboratory powered by 10 instruments Around 10 state-of-the-art instruments are part of the Curiosity rover, and these 10 instruments form the most advanced rover-based lab ever built by NASA. There are instruments to cut through rocks to examine them, and other instruments will search for organic compounds. Mounted cameras can study targets from a distance, and arm-mounted instruments can study the targets they touch. A microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers. 6. 
Rover Carrying 1.24 million names etched on silicon In early June 2009 NASA launched a campaign called “Send Your Name to Mars” and around 1.24 million people registered their names through NASA's website. All those 1.24 million names are etched on silicon chips mounted onto Curiosity's deck. If you registered your name in the campaign, maybe your name is going to reach Mars soon. Curiosity On the Web If you wish to follow the mission, here are a few links to help you NASA's Curiosity Web Page Follow Curiosity on Facebook Follow @MarsCuriosity on Twitter Artistic Gallery Image of Mars Rover Curiosity A printable sheet of Curiosity Mission [pdf] Images credit: NASA This article titled, 6 Interesting Facts About NASA's Mars Rover 'Curiosity', was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • How to introduce web development to non-programmers?

    - by Gulshan
    Once one of my non-programmer friends asked, "I have a cool website idea that I don't want to share. Rather, I want to develop it on my own. So, I want to learn web development. Tell me what to do?" And many other people have also asked how to start with web development as a profession, but they are non-programmers or not from a Computer Science background. What should I suggest to them? Learning programming from scratch? Or using CMS-like tools? Or anything else?

    Read the article

  • Start small, grow fast your SOA footprint by Edwin Biemond, Ronald van Luttikhuizen and Demed L’Her

    - by JuergenKress
    A set of pragmatic best practices for deploying a simple and sound SOA footprint that can grow with business demand. The paper contains details about Administrative, Infrastructure, Development and Architectural considerations. Authors: Edwin Biemond, Ronald van Luttikhuizen and Demed L'Her. We are very interested in publishing papers jointly with our partner community. Here is a list of possible SOA whitepapers that I am very interested in seeing published (note that the list is not exhaustive and I welcome any other topic you would like to volunteer). The format for these whitepapers would ideally be a 5 to 12 page document, possibly with a companion sample (to be hosted on http://java.net/projects/oraclesoasuite11g ). It is not marketing stuff. We will get them published on OTN, with proper credits, and use social media (Twitter, Facebook, etc.) to promote them. For information, the "quickstart guide" was downloaded more than 11,000 times over just 2 months, following a similar approach. These papers are a great way to get exposure and build your resume. We would prefer if we could get 2 people to collaborate on these papers (ideally 1 partner or customer and 1 Oracle person). This guarantees some level of peer review and gives greater legitimacy to the paper. Interested? Please contact Demed L'Her. Thank you! SOA Partner Community For regular information on Oracle SOA Suite, become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Start small grow fast,Edwin Biemond,Ronald van Luttikhuizen,Demed L'Her,SOA Suite,Oracle,OTN,SOA Partner Community,Jürgen Kress

    Read the article

  • web application or web portal? [closed]

    - by klo
    As the title says, what are the differences between those two? I read all the definitions and some articles, but I need information about some other aspects. Here is the thing. We want to build a web site that will contain: a site, a database, uploads, and numerous background services that would have to collect information from uploads and from some other sites, parse it, etc. I doubt that there are portlets that fit our specific needs, so we will have to make them ourselves. So, questions: 1. Deployment (and the difference in cost if possible): is deploying a portal much easier than a web app (Java or .NET)? 2. Server load: does a portal consume much server power (and can you strip a portal of things that you do not use)? 3. Implementation and development of portlets: can you make all the things that you could have done in Java or .NET? 4. General thoughts on when to use portals and when a classic web app. Thanks all in advance...

    Read the article

  • What's the significance of this Google Apps error code '0x8004010B' ?

    - by Freddy
    There seems to be a lack of good information around this particular Google Apps error code. Has anyone found common environmental information that causes this error? Any kind of solution? Uninstalling/re-installing Google Sync doesn't seem to do anything, and I've run Outlook's executable that scans/fixes the PST file, etc. Task 'Google Apps - ::user email address:: - Sending' reported error (0x8004010B) : 'Unknown Error 0x8004010B' I found a listing of error codes for the Gdata API but nothing for these types of 'unknown' error codes. Does Google make available a list of common 'unknown' error codes/messages? Thanks in advance.

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.  A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
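    If the no-new-indexes rule were ever relaxed, a hedged sketch (not from the original article) of the kind of indexes the plan is asking for would look like this; permanent indexes on the join keys make both the eager index spools and the hash spills unnecessary, because each join can seek on a real index instead of building a throwaway one in tempdb at run time:

        -- Hypothetical indexes on the join keys used by the test query.
        CREATE CLUSTERED INDEX cx_OrdDetail_Product
            ON #OrdDetail (ProductMainID, ProductSubID, ProductSubSubID);
        CREATE CLUSTERED INDEX cx_OrdHeader_SalesOrderID
            ON #OrdHeader (SalesOrderID);
        CREATE CLUSTERED INDEX cx_Custs_CustomerID
            ON #Custs (CustomerID);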

    Read the article

  • Domain registration and DNS, what am I actually paying for? [on hold]

    - by jozxyqk
    Long story short, I'm quite confused as to exactly what is offered by domain registration and DNS service sites. When I go to the url "http://google.com", my PC connects to a name server and gets the IP for "google.com", then connects to the IP and says, give me the page for "http://google.com". AFAIK there are many name servers and they all cache these bits of information in some hierarchical network, but ultimately a DNS record must come from a single source (not sure what this is called). There are different kinds of records that might not be an IP but an alias/redirect to other records, for example. Let's say I want my own domain name for some server. Maybe it even has a static IP but I want a nicer thing for people to remember, or my ISP assigns dynamic IPs and I want a URL that always works, or my website is hosted on a shared machine so the browser needs to send "http://mydnsname.com" to the webserver to distinguish it from other requests to the same IP but for different sites. Registering a domain costs a small amount of money per year. Where does this money go, not that I'm complaining :P? Is that really all it costs to maintain the entire DNS system of nameservers? If I just register the domain and nothing else, what do I get? Is that just reserving a name or hosting WHOIS information, or have I paid for a DNS record to be hosted? Can a domain alone have a record, such as an IP, or be an alias to another? A bunch of sites out there offer other services in addition to domain registration (I'm assuming they register the domain through another party for me). One example is "dynamic DNS" (DDNS), but isn't this just a regular DNS record that's updated regularly? Does it cost extra to update more often? Without a DDNS, can a DNS record still point to an IP? I've also seen the term "managed DNS" and have no idea where that fits in.

    Read the article

  • Chennai Metro Water [Indian Websites]

    - by samsudeen
    If you are living in Chennai, you will definitely have this question in your mind: “Will Chennai survive this summer without any water crisis?”. Chennai Metro Water maintains a website where you can find all the details about water storage and supply. This site has all the information about the water supply that matters for Chennai residents. I have given some of the uses of this site. Water Sewer Application Status If you are a new resident and have applied for a water/sewer connection, you can check your application (Water Sewer Application Status) by entering your WA No (Water Application No). You can also download all the required application forms. Register Complaints You can register complaints (Register Complaints) about water supply, defective water meters and sewage blocking online. Pay Water & Sewage Tax You can pay your water & sewage tax (Pay Tax) online, including any dues/arrears left out. Lake Level This site also maintains the current (Lake Level) and historic water levels of the Chennai water reservoirs, including the daily inflow/outflow of water. You can also see the daily/monthly rainfall levels of these water reservoirs from 1965 onwards. Hope this information is a little useful for those who are planning/building a new house in Chennai corporation. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • SQLAuthority News – Guest Post – FAULT Contract in WCF with Learning Video

    - by pinaldave
    This is a guest post by one of my very good friends and a .NET MVP, Dhananjay Kumar. The very first impression one gets when they meet him is his politeness. He is an extremely nice person, has superlative knowledge of .NET, and is truly helpful to all of us. Objective: This article will give a basic introduction on: How to handle an Exception at the service side? How to use a Fault contract at the service side? How to handle a service Exception at the client side? A Few Points about Exceptions at the Service: An Exception is technology-specific. An Exception should not be shared beyond the service boundary. Since an Exception is technology-specific, it cannot be propagated to other clients. Exceptions are of many types: CLR Exception, Windows32 Exception, Runtime Exception at the service, C++ Exception. An Exception is very much native to the technology in which the service is made. An Exception must be converted from technology-specific information to natural information that can be communicated to the client: a SOAP Fault. FaultException<T> The service should throw FaultException<T> instead of the usual CLR exception. FaultException<T> is a specialization of FaultException. Any client that programs against FaultException can handle the Exception thrown by FaultException<T>. The type parameter T conveys the error detail. T can be of any type, like an Exception, a CLR type or any type that can be serialized. T can be of type Data contract. T is a generic parameter that conveys the error details. You can read the complete article at http://dhananjaykumar.net/2010/05/23/fault-contract-in-wcf-with-learning-video/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart: dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts. Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1 Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped Here is most of /etc/mysql/my.cnf: Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock Here is entries for some specific programs The following values assume you have at least 32M ram This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] Basic Settings user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking Instead of skip-networking the default is now to listen only on localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 And here are permissions for var/run/mysqld/mysqld.sock: srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered but to no avail. Thanks! Dean Richardson Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. 
I worked through the escalating option levels one-by-one (innodb_force_recovery=1, innodb_force_recovery=2, etc.) This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean Update: Right after running sudo mysqld --innodb_force_recover=1 from the command line, the error.log contains this: Right after retrying sudo mysqld --innodb_force_recover=1, The error.log file shows this: 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) Then after mysql -u root -p and mysql> drop database molex_app_development; ERROR 2013 (HY000): Lost connection to MySQL server during query mysql> the error.log contains: dean@dgwjasonfried:/var/log/mysql$ tail -f error.log /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6a1c004bd8): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... 
InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64). 130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 10:58:23 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f7ba4f6c2f0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f7ba3065e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039] mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b] mysqld(+0x65e0fc)[0x7f7ba37160fc] mysqld(+0x602be6)[0x7f7ba36babe6] mysqld(+0x635006)[0x7f7ba36ed006] mysqld(+0x5d7072)[0x7f7ba368f072] mysqld(+0x5d7b9c)[0x7f7ba368fb9c] mysqld(+0x6a3348)[0x7f7ba375b348] mysqld(+0x6a3887)[0x7f7ba375b887] mysqld(+0x5c6a86)[0x7f7ba367ea86] mysqld(+0x5ae3a7)[0x7f7ba36663a7] mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd] mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78] mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a] mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d] mysqld(handle_one_connection+0x50)[0x7f7ba3461200] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f7b7c004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. --Dean
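    For context, a hedged sketch of the usual innodb_force_recovery escalation path in a situation like this (assuming the development schema is expendable, anything readable has been dumped first, and bearing in mind that the exact level needed depends on how badly the tablespace is damaged):

        -- Not from the original thread; innodb_force_recovery is a standard InnoDB
        -- setting, and the database name below is the poster's own.
        -- 1. In /etc/mysql/my.cnf, under [mysqld], add: innodb_force_recovery = 1
        --    (raise it one level at a time; values above 4 risk further damage).
        -- 2. Restart MySQL and dump whatever still reads cleanly, e.g.
        --    mysqldump -u root -p --all-databases > all-databases.sql
        -- 3. With the server still in forced-recovery mode, drop the corrupted schema:
        DROP DATABASE molex_app_development;
        -- 4. Remove innodb_force_recovery from my.cnf, restart MySQL, and restore
        --    anything needed from the dump.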

    Read the article

  • eeePC 1001HA/1101HA max resolution when connected to external display?

    - by Marco Demaio
    Hello, I would like to buy a new Eee PC 1001HA or 1101HA. I know the maximum built-in display resolution is 1024x600 for the eeePC 1001 and 1366x768 for the eeePC 1101, but what is the maximum resolution the graphics chipset can drive when either of these eeePCs is connected to an external LCD monitor? For example, if the external monitor supports a full HD resolution of 1920x1080, can these graphics chipsets go up to that resolution? It is really surprising to me that such useful information is missing from every ASUS website. Eee PCs are well suited to being connected to an external monitor, so I can't believe how hard this information is to find. I also downloaded the manual, but it isn't in there either. So I was hoping someone who owns one knows the answer. Thanks!

    Read the article

  • Windows Phone 7 review

    - by Jeff
    I finally got around to composing some thoughts on what I think about Windows Phone 7, and I posted those impressions on my personal blog. I'll save a few bytes and not repost it here. It should be obvious that my general impression is overwhelmingly positive. What I don't go into very deeply is how much I enjoy developing stuff for it. Baby Stopwatch was not even remotely hard to build, because it wasn't complex, but also because the platform itself is so easy to deal with. I've been messing around and building something a little more involved, and it too has been fun to work with. Sure, you have the quirks of Silverlight to work out, and then the phone-specific quirks after that, but it really is a lot of fun. If you haven't come up with a science project for the phone, I would encourage you to do so. Now if only I could find a gig here at Microsoft where people just build phone apps all day! (But not games... I know we already do that quite a bit.)

    Read the article

  • The Breakpoint with Paul Irish and Addy Osmani—Episode 2

    The Breakpoint with Paul Irish and Addy Osmani—Episode 2. Ask and vote for questions at: goo.gl Addy Osmani and the (real) Paul Irish return for the second live episode of The Breakpoint - a new show focusing on developer tooling and workflow. This week they'll be showing us brand new SASS, feature inspection and console features in the Chrome Developer Tools. If you want to stay on the bleeding edge of tooling, you won't want to miss it. From: GoogleDevelopers More in Science & Technology

    Read the article

  • Free Webinar Featuring Oracle Spatial and MapViewer, Oracle Business Intelligence, and Oracle Utilities

    - by stephen.garth
    Maps, BI and Network Management: Together At Last Date: Thursday, January 20 | Time: 11:00 a.m. PDT | 2:00 p.m. EDT | Duration: 1 hour Cost: FREE For years, utilities have wrestled with the challenge of providing executive management and other decision makers with maps and business intelligence during outages without compromising the performance of their real-time network operations and control systems. Join experts from Directions Media, Oracle and ThinkHuddle in this webinar for a discussion on how Oracle has addressed this challenge by incorporating Oracle Spatial data and the dashboard capabilities of Oracle Business Intelligence into a new application, Oracle Utilities Advanced Spatial Outage Analytics. Jim Steiner, Vice President of Spatial Product Management at Oracle, will provide an overview of Oracle's spatial and location technology, including Oracle Spatial 11g and Oracle Fusion Middleware MapViewer, and describe how Oracle is using this technology to spatially-enable many of its own enterprise applications. Brad Williams, Vice President of Oracle Utilities, will describe why and how the company developed Oracle Utilities Advanced Spatial Outage Analytics, how it works with Oracle Utilities Network Management System, and how this can deliver improved decision support and operational benefits to utilities. Steve Pierce, Spatial Systems Consultant with ThinkHuddle, will discuss architectural aspects and best practices in the integration of Oracle's spatial and BI technology. Following the presentation, attendees will have an opportunity to engage the panelists in a live Q&A session. Who Should Attend Executives, decision makers and analysts from IT, customer service, operations, engineering and marketing - especially in utilities, but also any business where location is important. Don't miss this webinar - Register Now. Find out more: Oracle Spatial on oracle.com More technical information on Oracle Technology Network Information on Oracle Utilities applications

    Read the article

  • CodePlex Daily Summary for Tuesday, March 30, 2010

    CodePlex Daily Summary for Tuesday, March 30, 2010

    New Projects
    CloudMail: Want to send email from Azure? Cloud Mail is designed to provide a small, effective and reliable solution for sending email from the Azure platfor...
    CommunityServer Extensions: Here you can find some CommunityServer extensions and bug fixes. The main goal is to provide you with the ability to correct some common problems...
    ContactSync: ContactSync is a set of .NET libraries, UI controls and applications for managing and synchronizing contact information. It includes managed wrapp...
    Dng portal: DNG Portal, based on ASP.NET MVC
    DotNetNuke Referral Tracker: The Referral Tracker module allows you to save URL variables, the referring page, and the previous page into a session variable or cookie. Then, th...
    Foursquare for Windows Phone 7: Foursquare for Windows Phone 7.
    GEGetDocConfig: SharePoint utility to list information concerning document libraries in one or more sites. Displays Size, Validity, Folder, Parent, Author, Minor a...
    Google Maps API for .NET: Fast and lightweight client libraries for Google Maps API.
    kbcchina: kbc china
    Load Test User Mock Toolkits: Purpose: this project is a framework mocking user activities with the VSTS Load Test tool to speed up test script development. (This project includes a general framework for simulating user behaviour, which can simplify...)
    Resonance: Resonance is a system to train neural networks; it allows you to automate the training of a neural network, distributing the calculation on multiple machi...
    SharePoint Company Directory / Active Directory Self Service System: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint. Then this info ca...
    SmartShelf: Manage files and folders on Windows and Windows Live.
    sysFix: sysFix is a tool for system administrators to easily manage and fix common system errors.
    xnaWebcam: Webcam usage in XNA Game Studio 3.1

    New Releases
    All-In-One Code Framework: All-In-One Code Framework 2010-03-29: Improved and newly added examples. For an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Samples for Azure Name Des...
    ARSoft.Tools.Net - C# DNS and SPF Library: 1.3.0: Added support for creating own DNS server implementations. Added full IPv6 support to the DNS client. Some performance optimizations to the DNS client.
    Artefact Animator: Artefact Animator - Silverlight 3 and WPF .NET 3.5: Artefact Animator Version 2.0.4.1. Silverlight 3: ArtefactSL.dll for both Debug and Release. WPF 3.5: Artefact.dll for both Debug and Release.
    BatterySaver: Version 0.4: Added support for a system tray icon for controlling the application and switching profiles (Issue)
    BizTalk Server 2006 Orchestration Profiler: Profiler v1.2: This is a point release of the profiler and has been updated to work on 64 bit systems. No other new functionality is available. To use this ensure...
    CloudMail: CloudMail_0.5_beta: Initial public release. For documentation see http://cloudmail.codeplex.com/documentation.
    CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.44: See Source Code tab for recent change history.
    Dawf: Dual Audio Workflow: Beta 2: Fix little bugs and improve usability by changing the way it finds the good audio.
    DotNetNuke Referral Tracker: 2.0.1: First release
    Foursquare for Windows Phone 7: Foursquare 2010.03.29.02: Foursquare 2010.03.29.02
    GEGetDocConfig: GEGETDOCCONFIG.ZIP: Installation: Simply download the zip file and extract the executable into its own directory on the SharePoint front end server. Note: There will b...
    GKO Libraries: GKO Libraries 0.2 Beta: Added: binary search, unmanaged wrappers, interop and p/invoke functions and structures, Windows service wrapper, video mode helpers and more.....
    Google Maps API for .NET: GoogleMapsForNET v0.9 alpha release: First version; contains Core library featuring: Geocoding API, Elevation API, Static Maps API
    Google Maps API for .NET: GoogleMapsForNET v0.9.1 alpha release: Fixed dependencies issues; added NUnit binaries and updated Newtonsoft Json library.
    Google Maps API for .NET: GoogleMapsForNET v0.9.2a alpha release: Recommended update. Code clean-up; did refactoring and major interface changes in Static Maps because it wasn't aligned to the 'simplest and least r...
    Home Access Plus+: v3.2.0.0: v3.2.0.0 Release Change Log: More AJAX to reduce page refreshes (Deleting, New Folder, Rename moved from browser popups) Only 3 Browser Popups (1...
    Html to OpenXml: HtmlToOpenXml 1.1: The dll library to include in your project. The dll is signed for GAC support. Compiled with .NET 3.5, dependencies on System.Drawing.dll and Docu...
    Latent Semantic Analysis: Latest sources: Just the latest sources. Just click the changeset. Please note that in order to compile this code you need to download some additional code. You ...
    Load Test User Mock Toolkits: Load Test User Mock Toolkits Help Doc: Samples and the framework introduction. (Includes an introduction to the framework and typical samples.)
    Load Test User Mock Toolkits: Open.LoadTest.User.Mock.Toolkits 1.0 alpha: This is not a formal release; it has not been optimized for performance, and the framework is still being refactored.
    Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.4: This edition supports: newer and older editions of Birdstep Technology's EasyConnect, HUAWEI Mobile Partner, MWConn, user defined location for s...
    Nito.KitchenSink: Version 3: Added Encoding.GetString(Stream, bool) for converting an entire stream into a string. Changed Stream.CopyTo to allow the stream to be closed/abor...
    Numina Application Framework: Numina.Framework Core 49088: Fixed bug with headers introduced in rev. 48249 with change to HttpUtil class. admin/User_Pending.aspx page users weren't able to be deleted. Do...
    OAuthLib: OAuthLib (1.6.4.0): Fix for 6390: make it possible to configure time out value.
    Quack Quack Says the Duck: Quack Quack Says The Duck 1.1.0.0: This new release pushes some work onto a background thread, clearing issues with multiple screen clicks while the UI was blocking.
    Rapidshare Episode Downloader: RED v0.8.4: - Added Edit feature - Moved season & episode int to string into a separate function - Fixed some more minor issues - Added 'Previous' feature - F...
    RoTwee: RoTwee (8.1.3.0): Update OAuthLib to 1.6.4.0
    SharePoint Company Directory / Active Directory Self Service System: SharePoint Company Directory with AD Import: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint. Then this info ca...
    Simply Classified: v1.00.12: Comsite Simply Classified v1.00.12 - STABLE - Tested against DotNetNuke v4.9.5 and v5.2.x. Bug Fixes/Enhancements: BUGFIX: Resolved issues with 1...
    sPATCH: sPatch v0.9: Completely recoded with wxWidgets. Following content is different to .NET Patcher: no requirement for .NET Framework; manual patch was removed to av...
    SSAS Profiler Trace Scheduler: SSAS Profiler Trace Scheduler: AS Profiler Scheduler is a tool that will enable scheduling of SQL AS tracing using predefined Profiler templates. For tracking different issues th...
    sysFix: sysfix build v5: A stable beta release, please refer to home page for further details.
    VOB2MKV: vob2mkv-1.0.4: This is a feature update of the VOB2MKV utility. The command-line parsing in the VOB2MKV application has been greatly improved. You can now get f...
    xnaWebcam: xnaWebcam 0.1: xnaWebcam 0.1 Program Version 0.1: -Show Webcam Device -Draw.String WebcamTexture.VideoDevice.Name.ToString() Instructions: 1. Plug-in your Webca...
    xnaWebcam: xnaWebcam 0.2: xnaWebcam 0.2 Version 0.2: -setResolution -Keys.Escape: this.Exit() << Exit the Game/Application. --- Version 0.1: -Show Webcam Device -Draw.Strin...
    xnaWebcam: xnaWebcam 0.21: xnaWebcam 0.2 Version 0.21: -Fix: Don't quit game/application after closing mainGameWindow -Fix: Text Position; Window.X, Window.Y --- Version 0.2...
    Xploit Game Engine: Xploit_1_1 Release: Added features: multiple Mesh instancing.
    Xploit Game Engine: Xploit_1_1 Source Code: Updates: create multiple instances of the same Mesh using XModelMesh and XSkinnedMesh.
    Yakiimo3D: DX11 DirectCompute Buddhabrot Source and Binary: DX11 DirectCompute Buddhabrot/Nebulabrot source and binary.

    Most Popular Projects
    Rawr
    WBFS Manager
    ASP.NET Ajax Library
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    AJAX Control Toolkit
    Windows Presentation Foundation (WPF)
    LiveUpload to Facebook
    ASP.NET
    Microsoft SQL Server Community & Samples

    Most Active Projects
    Rawr
    jQuery Library for SharePoint Web Services
    BlogEngine.NET
    LINQ to Twitter
    Managed Extensibility Framework
    Microsoft Biology Foundation
    Farseer Physics Engine
    N2 CMS
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    patterns & practices – Enterprise Library

    Read the article

  • Unity 3D game idea for a fun and teaching game

    - by rasheeda
    I have been brainstorming for months now on writing a Unity 3D game for my final-year computer science project. I have been learning Unity for some time now, but coming up with a concept is more difficult than I thought. The game has to be really fun and also educational, one that the school and community can benefit from. I am thinking about a third-person game where the player runs around an environment, picks up coins and earns points. This alone wouldn't earn me points, and I have been trying to find new ideas of my own and all over the net, but to no avail. Hopefully you guys can help me out with an idea, and with how to make the lecturers appreciate the game and also make kids want to play it for a reason. Thanks.

    Read the article
