Search Results

Search found 44742 results on 1790 pages for 'after create'.


  • web.xml not reloading in Tomcat even after stop/start

    - by ajay
    This is in relation to:- http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error I changed my web.xml file, did ant compile , all, /etc/init.d/tomcat stop , start Even then my web.xml file in tomcat deployment is still unchanged. This is build.properties file:- app.name=hello catalina.home=/usr/local/tomcat manager.username=admin manager.password=admin This is my build.xml file. Is there something wrong with this:- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- General purpose build script for web applications and web services, including enhanced support for deploying directly to a Tomcat 6 based server. This build script assumes that the source code of your web application is organized into the following subdirectories underneath the source code directory from which you execute the build script: docs Static documentation files to be copied to the "docs" subdirectory of your distribution. src Java source code (and associated resource files) to be compiled to the "WEB-INF/classes" subdirectory of your web applicaiton. web Static HTML, JSP, and other content (such as image files), including the WEB-INF subdirectory and its configuration file contents. $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $ --> <!-- A "project" describes a set of targets that may be requested when Ant is executed. The "default" attribute defines the target which is executed if no specific target is requested, and the "basedir" attribute defines the current working directory from which Ant executes the requested task. This is normally set to the current working directory. --> <project name="My Project" default="compile" basedir="."> <!-- ===================== Property Definitions =========================== --> <!-- Each of the following properties are used in the build script. Values for these properties are set by the first place they are defined, from the following list: * Definitions on the "ant" command line (ant -Dfoo=bar compile). * Definitions from a "build.properties" file in the top level source directory of this application. * Definitions from a "build.properties" file in the developer's home directory. * Default definitions in this build.xml file. You will note below that property values can be composed based on the contents of previously defined properties. This is a powerful technique that helps you minimize the number of changes required when your development environment is modified. Note that property composition is allowed within "build.properties" files as well as in the "build.xml" script. 
--> <property file="build.properties"/> <property file="${user.home}/build.properties"/> <!-- ==================== File and Directory Names ======================== --> <!-- These properties generally define file and directory names (or paths) that affect where the build process stores its outputs. app.name Base name of this application, used to construct filenames and directories. Defaults to "myapp". app.path Context path to which this application should be deployed (defaults to "/" plus the value of the "app.name" property). app.version Version number of this iteration of the application. build.home The directory into which the "prepare" and "compile" targets will generate their output. Defaults to "build". catalina.home The directory in which you have installed a binary distribution of Tomcat 6. This will be used by the "deploy" target. dist.home The name of the base directory in which distribution files are created. Defaults to "dist". manager.password The login password of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) manager.url The URL of the "/manager" web application on the Tomcat installation to which we will deploy web applications and web services. manager.username The login username of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) --> <property name="app.name" value="myapp"/> <property name="app.path" value="/${app.name}"/> <property name="app.version" value="0.1-dev"/> <property name="build.home" value="${basedir}/build"/> <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! --> <property name="dist.home" value="${basedir}/dist"/> <property name="docs.home" value="${basedir}/docs"/> <property name="manager.url" value="http://localhost:8080/manager"/> <property name="src.home" value="${basedir}/src"/> <property name="web.home" value="${basedir}/web"/> <!-- ==================== External Dependencies =========================== --> <!-- Use property values to define the locations of external JAR files on which your application will depend. In general, these values will be used for two purposes: * Inclusion on the classpath that is passed to the Javac compiler * Being copied into the "/WEB-INF/lib" directory during execution of the "deploy" target. Because we will automatically include all of the Java classes that Tomcat 6 exposes to web applications, we will not need to explicitly list any of those dependencies. You only need to worry about external dependencies for JAR files that you are going to include inside your "/WEB-INF/lib" directory. --> <!-- Dummy external dependency --> <!-- <property name="foo.jar" value="/path/to/foo.jar"/> --> <!-- ==================== Compilation Classpath =========================== --> <!-- Rather than relying on the CLASSPATH environment variable, Ant includes features that makes it easy to dynamically construct the classpath you need for each compilation. The example below constructs the compile classpath to include the servlet.jar file, as well as the other components that Tomcat makes available to web applications automatically, plus anything that you explicitly added. 
--> <path id="compile.classpath"> <!-- Include all JAR files that will be included in /WEB-INF/lib --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <!-- <pathelement location="${foo.jar}"/> --> <!-- Include all elements that Tomcat exposes to applications --> <fileset dir="${catalina.home}/bin"> <include name="*.jar"/> </fileset> <pathelement location="${catalina.home}/lib"/> <fileset dir="${catalina.home}/lib"> <include name="*.jar"/> </fileset> </path> <!-- ================== Custom Ant Task Definitions ======================= --> <!-- These properties define custom tasks for the Ant build tool that interact with the "/manager" web application installed with Tomcat 6. Before they can be successfully utilized, you must perform the following steps: - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation into the "lib" directory of your Ant installation. - Create a "build.properties" file in your application's top-level source directory (or your user login home directory) that defines appropriate values for the "manager.password", "manager.url", and "manager.username" properties described above. For more information about the Manager web application, and the functionality of these tasks, see <http://localhost:8080/tomcat-docs/manager-howto.html>. --> <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/> <!-- ==================== Compilation Control Options ==================== --> <!-- These properties control option settings on the Javac compiler when it is invoked using the <javac> task. compile.debug Should compilation include the debug option? compile.deprecation Should compilation include the deprecation option? compile.optimize Should compilation include the optimize option? --> <property name="compile.debug" value="true"/> <property name="compile.deprecation" value="false"/> <property name="compile.optimize" value="true"/> <!-- ==================== All Target ====================================== --> <!-- The "all" target is a shortcut for running the "clean" target followed by the "compile" target, to force a complete recompile. --> <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/> <!-- ==================== Clean Target ==================================== --> <!-- The "clean" target deletes any previous "build" and "dist" directory, so that you can be ensured the application can be built from scratch. --> <target name="clean" description="Delete old build and dist directories"> <delete dir="${build.home}"/> <delete dir="${dist.home}"/> </target> <!-- ==================== Compile Target ================================== --> <!-- The "compile" target transforms source files (from your "src" directory) into object files in the appropriate location in the build directory. This example assumes that you will be including your classes in an unpacked directory hierarchy under "/WEB-INF/classes". 
--> <target name="compile" depends="prepare" description="Compile Java sources"> <!-- Compile Java classes as necessary --> <mkdir dir="${build.home}/WEB-INF/classes"/> <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}"> <classpath refid="compile.classpath"/> </javac> <!-- Copy application resources --> <copy todir="${build.home}/WEB-INF/classes"> <fileset dir="${src.home}" excludes="**/*.java"/> </copy> </target> <!-- ==================== Dist Target ===================================== --> <!-- The "dist" target creates a binary distribution of your application in a directory structure ready to be archived in a tar.gz or zip file. Note that this target depends on two others: * "compile" so that the entire web application (including external dependencies) will have been assembled * "javadoc" so that the application Javadocs will have been created --> <target name="dist" depends="compile,javadoc" description="Create binary distribution"> <!-- Copy documentation subdirectories --> <mkdir dir="${dist.home}/docs"/> <copy todir="${dist.home}/docs"> <fileset dir="${docs.home}"/> </copy> <!-- Create application JAR file --> <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/> <!-- Copy additional files to ${dist.home} as necessary --> </target> <!-- ==================== Install Target ================================== --> <!-- The "install" target tells the specified Tomcat 6 installation to dynamically install this web application and make it available for execution. It does *not* cause the existence of this web application to be remembered across Tomcat restarts; if you restart the server, you will need to re-install all this web application. If you have already installed this application, and simply want Tomcat to recognize that you have updated Java classes (or the web.xml file), use the "reload" target instead. NOTE: This target will only succeed if it is run from the same server that Tomcat is running on. NOTE: This is the logical opposite of the "remove" target. --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <!-- ==================== Javadoc Target ================================== --> <!-- The "javadoc" target creates Javadoc API documentation for the Java classes included in your application. Normally, this is only required when preparing a distribution release, but is available as a separate target in case the developer wants to create Javadocs independently. --> <target name="javadoc" depends="compile" description="Create Javadoc API documentation"> <mkdir dir="${dist.home}/docs/api"/> <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*"> <classpath refid="compile.classpath"/> </javadoc> </target> <!-- ====================== List Target =================================== --> <!-- The "list" target asks the specified Tomcat 6 installation to list the currently running web applications, either loaded at startup time or installed dynamically. It is useful to determine whether or not the application you are currently developing has been installed. 
--> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <!-- ==================== Prepare Target ================================== --> <!-- The "prepare" target is used to create the "build" destination directory, and copy the static contents of your web application to it. If you need to copy static files from external dependencies, you can customize the contents of this task. Normally, this task is executed indirectly when needed. --> <target name="prepare"> <!-- Create build directories as needed --> <mkdir dir="${build.home}"/> <mkdir dir="${build.home}/WEB-INF"/> <mkdir dir="${build.home}/WEB-INF/classes"/> <!-- Copy static content of this web application --> <copy todir="${build.home}"> <fileset dir="${web.home}"/> </copy> <!-- Copy external dependencies as required --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <mkdir dir="${build.home}/WEB-INF/lib"/> <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> --> <!-- Copy static files from external dependencies as needed --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> </target> <!-- ==================== Reload Target =================================== --> <!-- The "reload" signals the specified application Tomcat 6 to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not reread on a reload. If you have made changes to your web.xml file you must stop then start the web application. --> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <!-- ==================== Remove Target =================================== --> <!-- The "remove" target tells the specified Tomcat 6 installation to dynamically remove this web application from service. NOTE: This is the logical opposite of the "install" target. --> <target name="remove" description="Remove application on servlet container"> <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> </project>

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by Emtucifor
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database need to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running as doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of: Run IIS as a custom-created domain user who is granted high permissions If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user) This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place. Connect using Kerberos, then delegate This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for the web server so it is "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have. Create a service or COM+ application that fetches the resource for the web site Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. 
I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: This is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. Map drive letters to shares This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this also is going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user, I don't know the IUSR account password to log in as it and create the mappings. Move the resources local to the web server/database There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes, it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file. Move the service to the final web server/database I suppose I could run a web server on my SQL Server database, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No. Virtual directories in IIS I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them. I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...
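
    A minimal C# sketch of the COM+ (Enterprise Services) wrapper described above, for illustration only: every name and connection string here is hypothetical, and the original component could just as well be classic VB returning an ADODB.Connection. Because the component is registered as a COM+ server application, it runs under its own configured identity, so its calls to shares and databases are made with that account rather than the IIS worker/IUSR account.

        using System.Data;
        using System.Data.OleDb;
        using System.EnterpriseServices;
        using System.IO;

        // Hypothetical COM+ server application; its identity is configured in Component Services.
        [assembly: ApplicationName("ResourceBroker")]
        [assembly: ApplicationActivation(ActivationOption.Server)]

        public class ResourceBroker : ServicedComponent
        {
            // Fetch a file (e.g. a protected image) from a UNC share the web user cannot reach.
            public byte[] ReadFile(string uncPath)
            {
                return File.ReadAllBytes(uncPath);
            }

            // Query an Access database sitting on a file share and return the rows.
            public DataTable Query(string uncMdbPath, string sql)
            {
                using (var conn = new OleDbConnection(
                    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + uncMdbPath))
                using (var adapter = new OleDbDataAdapter(sql, conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);   // DataTable is serializable, so it marshals back to the caller
                    return table;
                }
            }
        }

    The web code then just calls new ResourceBroker().Query(...), and the network access happens under the component's identity, keeping the permission decision with the application developer as described above.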

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This is a somewhat completion of the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that also involves a practical example that I believe is commonly used, i.e. upload and present an image from a user.   First we set up for local storage and then we configure for them to work on a web role. Steps: 1. Configure connection string locally. 2. Configure model, controllers and razor views.   1. Setup connectionsstring 1.1 Right click your web role and choose “Properties”. 1.2 Click Settings. 1.3 Add setting. 1.4 Name your setting. This will be the name of the connectionstring. 1.5 Click the ellipsis to the right. (the ellipsis appear when you mark the area. 1.6 The following window appears- Select “Windows Azure storage emulator” and click ok.   Now we have a connection string to use. To be able to use it we need to make sure we have windows azure tools for storage. 2.1 Click Tools –> Library Package manager –> Manage Nuget packages for solution. 2.2 This is what it looks like after it has been added.   Now on to what the code should look like. 3.1 First we need a view which collects images to upload. Here Index.cshtml. 1: @model List<string> 2:  3: @{ 4: ViewBag.Title = "Index"; 5: } 6:  7: <h2>Index</h2> 8: <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data"> 9:  10: <label for="file">Filename:</label> 11: <input type="file" name="file" id="file1" /> 12: <br /> 13: <label for="file">Filename:</label> 14: <input type="file" name="file" id="file2" /> 15: <br /> 16: <label for="file">Filename:</label> 17: <input type="file" name="file" id="file3" /> 18: <br /> 19: <label for="file">Filename:</label> 20: <input type="file" name="file" id="file4" /> 21: <br /> 22: <input type="submit" value="Submit" /> 23: 24: </form> 25:  26: @foreach (var item in Model) { 27:  28: <img src="@item" alt="Alternate text"/> 29: } 3.2 We need a controller to receive the post. Notice the “containername” string I send to the blobhandler. I use this as a folder for the pictures for each user. If this is not a requirement you could just call it container or anything with small characters directly when creating the container. 1: public ActionResult Upload(IEnumerable<HttpPostedFileBase> file) 2: { 3: BlobHandler bh = new BlobHandler("containername"); 4: bh.Upload(file); 5: var blobUris=bh.GetBlobs(); 6: 7: return RedirectToAction("Index",blobUris); 8: } 3.3 The handler model. I’ll let the comments speak for themselves. 1: public class BlobHandler 2: { 3: // Retrieve storage account from connection string. 4: CloudStorageAccount storageAccount = CloudStorageAccount.Parse( 5: CloudConfigurationManager.GetSetting("StorageConnectionString")); 6: 7: private string imageDirecoryUrl; 8: 9: /// <summary> 10: /// Receives the users Id for where the pictures are and creates 11: /// a blob storage with that name if it does not exist. 12: /// </summary> 13: /// <param name="imageDirecoryUrl"></param> 14: public BlobHandler(string imageDirecoryUrl) 15: { 16: this.imageDirecoryUrl = imageDirecoryUrl; 17: // Create the blob client. 18: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 19: 20: // Retrieve a reference to a container. 21: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 22: 23: // Create the container if it doesn't already exist. 
24: container.CreateIfNotExists(); 25: 26: //Make available to everyone 27: container.SetPermissions( 28: new BlobContainerPermissions 29: { 30: PublicAccess = BlobContainerPublicAccessType.Blob 31: }); 32: } 33: 34: public void Upload(IEnumerable<HttpPostedFileBase> file) 35: { 36: // Create the blob client. 37: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 38: 39: // Retrieve a reference to a container. 40: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 41: 42: if (file != null) 43: { 44: foreach (var f in file) 45: { 46: if (f != null) 47: { 48: CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName); 49: blockBlob.UploadFromStream(f.InputStream); 50: } 51: } 52: } 53: } 54: 55: public List<string> GetBlobs() 56: { 57: // Create the blob client. 58: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 59: 60: // Retrieve reference to a previously created container. 61: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 62: 63: List<string> blobs = new List<string>(); 64: 65: // Loop over blobs within the container and output the URI to each of them 66: foreach (var blobItem in container.ListBlobs()) 67: blobs.Add(blobItem.Uri.ToString()); 68: 69: return blobs; 70: } 71: } 3.4 So, when the files have been uploaded we will get them to present them to out user in the index page. Pretty straight forward. In this example we only present the image by sending the Uri’s to the view. A better way would be to save them up in a view model containing URI, metadata, alternate text, and other relevant information but for this example this is all we need.   4. Now press F5 in your solution to try it out. You can see the storage emulator UI here:     4.1 If you get any exceptions or errors I suggest to first check if the service Is running correctly. I had problem with this and they seemed related to the installation and a reboot fixed my problems.     5. Set up for Cloud storage. To do this we need to add configuration for cloud just as we did for local in step one. 5.1 We need our keys to do this. Go to the windows Azure menagement portal, select storage icon to the right and click “Manage keys”. (Image from a different blog post though).   5.2 Do as in step 1.but replace step 1.6 with: 1.6 Choose “Manually entered credentials”. Enter your account name. 1.7 Paste your Account Key from step 5.1. and click ok.   5.3. Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
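
    For completeness, the post never shows the Index action that supplies the List<string> model to Index.cshtml. A minimal sketch (the controller name is hypothetical; it reuses the post's BlobHandler and simply re-reads the container after the upload redirect) could look like this:

        using System.Collections.Generic;
        using System.Web;
        using System.Web.Mvc;

        public class PictureController : Controller
        {
            // Feeds the List<string> of blob URIs that Index.cshtml loops over.
            public ActionResult Index()
            {
                var bh = new BlobHandler("containername");
                List<string> blobUris = bh.GetBlobs();
                return View(blobUris);
            }

            public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
            {
                var bh = new BlobHandler("containername");
                bh.Upload(file);
                return RedirectToAction("Index");   // Index re-lists the container, so no model is passed through the redirect
            }
        }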

    Read the article

  • Library like ENet, but for TCP?

    - by Milo
    I'm not looking to use boost::asio, it is overly complex for my needs. I'm building a game that is cross platform, for desktop, iPhone and Android. I found a library called ENet which is pretty much what I need, but it uses UDP which does not seem to support encryption and a few other things. Given that the game is an event driven card game, TCP seems like the right fit. However, all I have found is WINSOCK / berkley sockets and bost::asio. Here is a sample client server application with ENet: #include <enet/enet.h> #include <stdlib.h> #include <string> #include <iostream> class Host { ENetAddress address; ENetHost * server; ENetHost* client; ENetEvent event; public: Host() :server(NULL) { enet_initialize(); setupServer(); } void setupServer() { if(server) { enet_host_destroy(server); server = NULL; } address.host = ENET_HOST_ANY; /* Bind the server to port 1234. */ address.port = 1721; server = enet_host_create (& address /* the address to bind the server host to */, 32 /* allow up to 32 clients and/or outgoing connections */, 2 /* allow up to 2 channels to be used, 0 and 1 */, 0 /* assume any amount of incoming bandwidth */, 0 /* assume any amount of outgoing bandwidth */); } void daLoop() { while(true) { /* Wait up to 1000 milliseconds for an event. */ while (enet_host_service (server, & event, 5000) > 0) { ENetPacket * packet; switch (event.type) { case ENET_EVENT_TYPE_CONNECT: printf ("A new client connected from %x:%u.\n", event.peer -> address.host, event.peer -> address.port); /* Store any relevant client information here. */ event.peer -> data = "Client information"; /* Create a reliable packet of size 7 containing "packet\0" */ packet = enet_packet_create ("packet", strlen ("packet") + 1, ENET_PACKET_FLAG_RELIABLE); /* Extend the packet so and append the string "foo", so it now */ /* contains "packetfoo\0" */ enet_packet_resize (packet, strlen ("packetfoo") + 1); strcpy ((char*)& packet -> data [strlen ("packet")], "foo"); /* Send the packet to the peer over channel id 0. */ /* One could also broadcast the packet by */ /* enet_host_broadcast (host, 0, packet); */ enet_peer_send (event.peer, 0, packet); /* One could just use enet_host_service() instead. */ enet_host_flush (server); break; case ENET_EVENT_TYPE_RECEIVE: printf ("A packet of length %u containing %s was received from %s on channel %u.\n", event.packet -> dataLength, event.packet -> data, event.peer -> data, event.channelID); /* Clean up the packet now that we're done using it. */ enet_packet_destroy (event.packet); break; case ENET_EVENT_TYPE_DISCONNECT: printf ("%s disconected.\n", event.peer -> data); /* Reset the peer's client information. */ event.peer -> data = NULL; } } } } ~Host() { if(server) { enet_host_destroy(server); server = NULL; } atexit (enet_deinitialize); } }; class Client { ENetAddress address; ENetEvent event; ENetPeer *peer; ENetHost* client; public: Client() :peer(NULL) { enet_initialize(); setupPeer(); } void setupPeer() { client = enet_host_create (NULL /* create a client host */, 1 /* only allow 1 outgoing connection */, 2 /* allow up 2 channels to be used, 0 and 1 */, 57600 / 8 /* 56K modem with 56 Kbps downstream bandwidth */, 14400 / 8 /* 56K modem with 14 Kbps upstream bandwidth */); if (client == NULL) { fprintf (stderr, "An error occurred while trying to create an ENet client host.\n"); exit (EXIT_FAILURE); } /* Connect to some.server.net:1234. */ enet_address_set_host (& address, "192.168.2.13"); address.port = 1721; /* Initiate the connection, allocating the two channels 0 and 1. 
*/ peer = enet_host_connect (client, & address, 2, 0); if (peer == NULL) { fprintf (stderr, "No available peers for initiating an ENet connection.\n"); exit (EXIT_FAILURE); } /* Wait up to 5 seconds for the connection attempt to succeed. */ if (enet_host_service (client, & event, 20000) > 0 && event.type == ENET_EVENT_TYPE_CONNECT) { std::cout << "Connection to some.server.net:1234 succeeded." << std::endl; } else { /* Either the 5 seconds are up or a disconnect event was */ /* received. Reset the peer in the event the 5 seconds */ /* had run out without any significant event. */ enet_peer_reset (peer); puts ("Connection to some.server.net:1234 failed."); } } void daLoop() { ENetPacket* packet; /* Create a reliable packet of size 7 containing "packet\0" */ packet = enet_packet_create ("backet", strlen ("backet") + 1, ENET_PACKET_FLAG_RELIABLE); /* Extend the packet so and append the string "foo", so it now */ /* contains "packetfoo\0" */ enet_packet_resize (packet, strlen ("backetfoo") + 1); strcpy ((char*)& packet -> data [strlen ("backet")], "foo"); /* Send the packet to the peer over channel id 0. */ /* One could also broadcast the packet by */ /* enet_host_broadcast (host, 0, packet); */ enet_peer_send (event.peer, 0, packet); /* One could just use enet_host_service() instead. */ enet_host_flush (client); while(true) { /* Wait up to 1000 milliseconds for an event. */ while (enet_host_service (client, & event, 1000) > 0) { ENetPacket * packet; switch (event.type) { case ENET_EVENT_TYPE_RECEIVE: printf ("A packet of length %u containing %s was received from %s on channel %u.\n", event.packet -> dataLength, event.packet -> data, event.peer -> data, event.channelID); /* Clean up the packet now that we're done using it. */ enet_packet_destroy (event.packet); break; } } } } ~Client() { atexit (enet_deinitialize); } }; int main() { std::string a; std::cin >> a; if(a == "host") { Host host; host.daLoop(); } else { Client c; c.daLoop(); } return 0; } I looked at some socket tutorials and they seemed a bit too low level. I just need something that abstracts away the platform (eg, no WINSOCKS) and that has basic ability to keep track of connected clients and send them messages. Thanks

    Read the article

  • XML file save/read error (making a highscore system for an XNA game)

    - by Eddy
    i get an error after i write player name to the file for second or third time (An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll Additional information: There is an error in XML document (18, 17).) (in highscores load method In data = (HighScoreData)serializer.Deserialize(stream); it stops) the problem is that some how it adds additional "" at the end of my .dat file could anyone tell me how to fix this? the file before save looks: <?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>neil</string> <string>shawn</string> <string>mark</string> <string>cindy</string> <string>sam</string> </PlayerName> <Score> <int>200</int> <int>180</int> <int>150</int> <int>100</int> <int>50</int> </Score> <Count>5</Count> </HighScoreData> the file after save looks: <?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>Nick</string> <string>Nick</string> <string>neil</string> <string>shawn</string> <string>mark</string> </PlayerName> <Score> <int>210</int> <int>210</int> <int>200</int> <int>180</int> <int>150</int> </Score> <Count>5</Count> </HighScoreData>> the part of my code that does all of save load to xml is: DECLARATIONS PART [Serializable] public struct HighScoreData { public string[] PlayerName; public int[] Score; public int Count; public HighScoreData(int count) { PlayerName = new string[count]; Score = new int[count]; Count = count; } } IAsyncResult result = null; bool inputName; HighScoreData data; int Score = 0; public string NAME; public string HighScoresFilename = "highscores.dat"; Game1 constructor public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; Width = graphics.PreferredBackBufferWidth = 960; Height = graphics.PreferredBackBufferHeight =640; GamerServicesComponent GSC = new GamerServicesComponent(this); Components.Add(GSC); } Inicialize function (end of it) protected override void Initialize() { //other game code base.Initialize(); string fullpath =Path.Combine(HighScoresFilename); if (!File.Exists(fullpath)) { //If the file doesn't exist, make a fake one... 
// Create the data to save data = new HighScoreData(5); data.PlayerName[0] = "neil"; data.Score[0] = 200; data.PlayerName[1] = "shawn"; data.Score[1] = 180; data.PlayerName[2] = "mark"; data.Score[2] = 150; data.PlayerName[3] = "cindy"; data.Score[3] = 100; data.PlayerName[4] = "sam"; data.Score[4] = 50; SaveHighScores(data, HighScoresFilename); } } all methods for loading saving and output public static void SaveHighScores(HighScoreData data, string filename) { // Get the path of the save game string fullpath = Path.Combine("highscores.dat"); // Open the file, creating it if necessary FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate); try { // Convert the object to XML data and put it in the stream XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData)); serializer.Serialize(stream, data); } finally { // Close the file stream.Close(); } } /* Load highscores */ public static HighScoreData LoadHighScores(string filename) { HighScoreData data; // Get the path of the save game string fullpath = Path.Combine("highscores.dat"); // Open the file FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read); try { // Read the data from the file XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData)); data = (HighScoreData)serializer.Deserialize(stream);//this is the line // where program gives an error } finally { // Close the file stream.Close(); } return (data); } /* Save player highscore when game ends */ private void SaveHighScore() { // Create the data to saved HighScoreData data = LoadHighScores(HighScoresFilename); int scoreIndex = -1; for (int i = 0; i < data.Count ; i++) { if (Score > data.Score[i]) { scoreIndex = i; break; } } if (scoreIndex > -1) { //New high score found ... do swaps for (int i = data.Count - 1; i > scoreIndex; i--) { data.PlayerName[i] = data.PlayerName[i - 1]; data.Score[i] = data.Score[i - 1]; } data.PlayerName[scoreIndex] = NAME; //Retrieve User Name Here data.Score[scoreIndex] = Score; // Retrieve score here SaveHighScores(data, HighScoresFilename); } } /* Iterate through data if highscore is called and make the string to be saved*/ public string makeHighScoreString() { // Create the data to save HighScoreData data2 = LoadHighScores(HighScoresFilename); // Create scoreBoardString string scoreBoardString = "Highscores:\n\n"; for (int i = 0; i<5;i++) { scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n"; } return scoreBoardString; } when ill make this work i will start this code when i call game over (now i start it when i press some buttons, so i could test it faster) public void InputYourName() { if (result == null && !Guide.IsVisible) { string title = "Name"; string description = "Write your name in order to save your Score"; string defaultText = "Nick"; PlayerIndex playerIndex = new PlayerIndex(); result= Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null); // NAME = result.ToString(); } if (result != null && result.IsCompleted) { NAME = Guide.EndShowKeyboardInput(result); result = null; inputName = false; SaveHighScore(); } } this where i call output to the screen (ill call this in highscores meniu section when i am done with debugging) spriteBatch.DrawString(Font1, "" + makeHighScoreString(),new Vector2(500,200), Color.White); }
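
    A likely culprit is that SaveHighScores opens the file with FileMode.OpenOrCreate, which does not truncate an existing file: whenever the freshly serialized XML comes out even one byte shorter than what was already on disk, the old tail survives, and that is exactly the stray '>' that then breaks Deserialize. A minimal sketch of the save method using a truncating mode (same HighScoreData type as above):

        public static void SaveHighScores(HighScoreData data, string filename)
        {
            string fullpath = filename;   // e.g. "highscores.dat"

            // FileMode.Create truncates any existing file before writing,
            // so leftover characters from a longer previous save cannot remain.
            using (FileStream stream = File.Open(fullpath, FileMode.Create, FileAccess.Write))
            {
                XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                serializer.Serialize(stream, data);
            }
        }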

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the Beginning SSRS. Supporting files are available with a free download from the www.Joes2Pros.com web site. Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town’s neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source, whether to embed it in the report or to make it a shared resource usable by many reports. Data Sources and Data Sets A few basic terms will make the upcoming choses make more sense. What database on what server do you want to connect to? It would be better to just ask… “what is your data source?” The connection you need to make to get your reports data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set. Creating a local Data Source You can connect embed a connection from your report directly to your JProCo database which (let’s say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the Data Set. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It’s possible to make a project level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source. Creating a Shared Data Source The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown in. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch where you can fill in a name for the data source. By default, it is named DataSource1. 
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing the InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What’s next I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5 day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, Build reports on your own (without the wizards),  Add report headers and footers, Accept user input,  create levels, charts, or even maps for visual appeal. You might be surprise to know a small 230 page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small easy to follow book so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
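
    For reference, the connection string these dialog choices produce is equivalent to the one this small C# sketch builds (the server and database names are the ones assumed in the walkthrough; substitute Localhost\InstanceName for a named instance):

        using System;
        using System.Data.SqlClient;

        class SharedDataSourceDemo
        {
            static void Main()
            {
                // Local default instance, JProCo database, Windows Authentication.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "Localhost",
                    InitialCatalog = "JProCo",
                    IntegratedSecurity = true
                };
                Console.WriteLine(builder.ConnectionString);
                // Data Source=Localhost;Initial Catalog=JProCo;Integrated Security=True
            }
        }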

    Read the article

  • DirectX 10 Primitive is not displayed

    - by pypmannetjies
    I am trying to write my first DirectX 10 program that displays a triangle. Everything compiles fine, and the render function is called, since the background changes to black. However, the triangle I'm trying to draw with a triangle strip primitive is not displayed at all. The Initialization function: bool InitDirect3D(HWND hWnd, int width, int height) { //****** D3DDevice and SwapChain *****// DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(swapChainDesc)); swapChainDesc.BufferCount = 1; swapChainDesc.BufferDesc.Width = width; swapChainDesc.BufferDesc.Height = height; swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; swapChainDesc.BufferDesc.RefreshRate.Numerator = 60; swapChainDesc.BufferDesc.RefreshRate.Denominator = 1; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.OutputWindow = hWnd; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.Windowed = TRUE; if (FAILED(D3D10CreateDeviceAndSwapChain( NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, 0, D3D10_SDK_VERSION, &swapChainDesc, &pSwapChain, &pD3DDevice))) return fatalError(TEXT("Hardware does not support DirectX 10!")); //***** Shader *****// if (FAILED(D3DX10CreateEffectFromFile( TEXT("basicEffect.fx"), NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, pD3DDevice, NULL, NULL, &pBasicEffect, NULL, NULL))) return fatalError(TEXT("Could not load effect file!")); pBasicTechnique = pBasicEffect->GetTechniqueByName("Render"); pViewMatrixEffectVariable = pBasicEffect->GetVariableByName( "View" )->AsMatrix(); pProjectionMatrixEffectVariable = pBasicEffect->GetVariableByName( "Projection" )->AsMatrix(); pWorldMatrixEffectVariable = pBasicEffect->GetVariableByName( "World" )->AsMatrix(); //***** Input Assembly Stage *****// D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = 2; D3D10_PASS_DESC PassDesc; pBasicTechnique->GetPassByIndex(0)->GetDesc(&PassDesc); if (FAILED( pD3DDevice->CreateInputLayout( layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &pVertexLayout))) return fatalError(TEXT("Could not create Input Layout.")); pD3DDevice->IASetInputLayout( pVertexLayout ); //***** Vertex buffer *****// UINT numVertices = 100; D3D10_BUFFER_DESC bd; bd.Usage = D3D10_USAGE_DYNAMIC; bd.ByteWidth = sizeof(vertex) * numVertices; bd.BindFlags = D3D10_BIND_VERTEX_BUFFER; bd.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE; bd.MiscFlags = 0; if (FAILED(pD3DDevice->CreateBuffer(&bd, NULL, &pVertexBuffer))) return fatalError(TEXT("Could not create vertex buffer!"));; UINT stride = sizeof(vertex); UINT offset = 0; pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset ); //***** Rasterizer *****// // Set the viewport viewPort.Width = width; viewPort.Height = height; viewPort.MinDepth = 0.0f; viewPort.MaxDepth = 1.0f; viewPort.TopLeftX = 0; viewPort.TopLeftY = 0; pD3DDevice->RSSetViewports(1, &viewPort); D3D10_RASTERIZER_DESC rasterizerState; rasterizerState.CullMode = D3D10_CULL_NONE; rasterizerState.FillMode = D3D10_FILL_SOLID; rasterizerState.FrontCounterClockwise = true; rasterizerState.DepthBias = false; rasterizerState.DepthBiasClamp = 0; rasterizerState.SlopeScaledDepthBias = 0; rasterizerState.DepthClipEnable = true; rasterizerState.ScissorEnable = false; rasterizerState.MultisampleEnable = false; rasterizerState.AntialiasedLineEnable = true; 
ID3D10RasterizerState* pRS; pD3DDevice->CreateRasterizerState(&rasterizerState, &pRS); pD3DDevice->RSSetState(pRS); //***** Output Merger *****// // Get the back buffer from the swapchain ID3D10Texture2D *pBackBuffer; if (FAILED(pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer))) return fatalError(TEXT("Could not get back buffer.")); // create the render target view if (FAILED(pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView))) return fatalError(TEXT("Could not create the render target view.")); // release the back buffer pBackBuffer->Release(); // set the render target pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL); return true; } The render function: void Render() { if (pD3DDevice != NULL) { pD3DDevice->ClearRenderTargetView(pRenderTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 0.0f)); //create world matrix static float r; D3DXMATRIX w; D3DXMatrixIdentity(&w); D3DXMatrixRotationY(&w, r); r += 0.001f; //set effect matrices pWorldMatrixEffectVariable->SetMatrix(w); pViewMatrixEffectVariable->SetMatrix(viewMatrix); pProjectionMatrixEffectVariable->SetMatrix(projectionMatrix); //fill vertex buffer with vertices UINT numVertices = 3; vertex* v = NULL; //lock vertex buffer for CPU use pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v ); v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1) ); v[1] = vertex( D3DXVECTOR3(0,1,0), D3DXVECTOR4(0,1,0,1) ); v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1) ); pVertexBuffer->Unmap(); // Set primitive topology pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP ); //get technique desc D3D10_TECHNIQUE_DESC techDesc; pBasicTechnique->GetDesc(&techDesc); for(UINT p = 0; p < techDesc.Passes; ++p) { //apply technique pBasicTechnique->GetPassByIndex(p)->Apply(0); //draw pD3DDevice->Draw(numVertices, 0); } pSwapChain->Present(0,0); } }

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically create and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel. Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem of the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order. 
The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart which is a relational database and the OLAP Cube which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements.  So, in broad terms the steps required to fulfil a client order are as follows: Step 1: Prepare metadata Create a set of database names unique to the client's order Modify all package connection strings to be used by SSIS to point to new databases and file locations. Step 2: Create relational databases Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases Step 3: Load staging database Use SSIS to load all data files into the staging database in a parallel operation Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube. Step 4: Load data mart relational database Load the data from staging into the data mart relational database, again in parallel where possible Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables Step 5: Load extension tables & attributes Pivot the extension attributes from their native name/value pairs into proper relational tables Add the extension attributes to the views used by OLAP cube Step 6: Deploy & Process OLAP cube Deploy the OLAP database directly to the server using a C# script task in SSIS Modify the connection string used by the OLAP cube to point to the data mart relational database Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions Remove any standard attributes that not required Process the OLAP cube Step 7: Backup and drop databases Drop staging database as it is no longer required Backup data mart relational and OLAP database and ship these to the client's infrastructure Drop data mart relational and OLAP database from the build server Mark order complete Start processing the next order, ad infinitum. So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
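
    As an illustration of Step 6, the cube-side changes can be scripted with the Analysis Management Objects (AMO) library from a C# script task. This is only a minimal sketch under assumed names (the server, OLAP database and data mart connection string are placeholders that the real process would read from the metadata database); deployment of the OLAP database and the data source view edits for extension attributes are omitted:

        using Microsoft.AnalysisServices;   // AMO

        class CubeRepoint
        {
            static void Main()
            {
                // Placeholders: in practice these come from the metadata database.
                const string olapServer = "localhost";
                const string olapDatabaseName = "ClientOrder_12345";
                const string dataMartConnection =
                    "Provider=SQLNCLI11;Data Source=localhost;Initial Catalog=ClientOrder_12345_DM;Integrated Security=SSPI;";

                var server = new Server();
                server.Connect(olapServer);

                Database db = server.Databases.FindByName(olapDatabaseName);

                // Re-point the cube's data source at the freshly built data mart relational database.
                DataSource ds = db.DataSources[0];
                ds.ConnectionString = dataMartConnection;
                ds.Update();

                // Fully process so the cube picks up the new data and any added attributes.
                db.Process(ProcessType.ProcessFull);

                server.Disconnect();
            }
        }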

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East EnterpriseSecurity Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog so stay tuned for more posts from him. It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn’t want to have to remember a new account and password, I didn’t want to provide the requisite information, and I didn’t want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for? A few days later, I had a similar experience, but this one went a little differently. I was surfing the web, when I happened upon some little chotcke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I’ve already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again. This is where security can grow your business. It’s a differentiator. You’ve got to have a presence on the web, and that presence has to take into account all the smart phones everyone’s carrying, and the tablets that took over cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again. I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to login from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that’s who I am doing business with. Let’s say, for example, that I’m in a regular Texas Hold-em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I login from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now? 
Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, when I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their “policy” is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?) But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn’t be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker half way around the world. But, what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they’ll come back, and perhaps even Tweet about it, or Like you, and then their friends will follow. How can Oracle help? Oracle has the technology and expertise to help you to grown your business with security. Oracle Adaptive Access Manager will help you to prevent fraud while making it easier for your customers to do business with you by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you to secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you to grow your business securely. Security is not just a cost anymore. It’s a way to set your business apart. With Oracle’s help, you can be the business that everyone’s tweeting about. Image courtesy of Flickr user shareski

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed for users to press the ctrl key and selecting multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined Note: As a general hint of best practices and to simplify your life: Make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil. To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon  in the Edit Tree Binding  dialog, the data structure, Locations -  Departments – Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the conextMenu facet. Use the right mouse button to insert a Popup  into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu item text should be changed to something meaningful like "Delete". Note that the custom menu item later is added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined. 
The tree handle is used in the action listener code, which is shown below: public void onTreeNodeDelete(ActionEvent actionEvent) {   //access the tree from the JSF component reference created   //using the af:tree "binding" property. The "binding" property   //creates a pair of set/get methods to access the RichTree instance   RichTree tree = this.getTreeHandler();   //get the list of selected row keys   RowKeySet rks = tree.getSelectedRowKeys();   //access the iterator to loop over selected nodes   Iterator rksIterator = rks.iterator();          //The CollectionModel represents the tree model and is   //accessed from the tree "value" property   CollectionModel model = (CollectionModel) tree.getValue();   //The CollectionModel is a wrapper for the ADF tree binding   //class, which is JUCtrlHierBinding   JUCtrlHierBinding treeBinding =                  (JUCtrlHierBinding) model.getWrappedData();          //loop over the selected nodes and delete the rows they   //represent   while(rksIterator.hasNext()){     List nodeKey = (List) rksIterator.next();     //find the ADF node binding using the node key     JUCtrlHierNodeBinding node =                       treeBinding.findNodeByKeyPath(nodeKey);     //delete the row.     Row rw = node.getRow();       rw.remove();   }          //only refresh the tree if tree nodes have been selected   if(rks.size() > 0){     AdfFacesContext adfFacesContext =                          AdfFacesContext.getCurrentInstance();     adfFacesContext.addPartialTarget(tree);   } } Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html 
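As a point of reference for the multi-select note above, both the selection mode and the context menu wiring live on the tree component itself. The fragment below is a schematic af:tree definition; the ids, EL expressions and managed bean name are placeholders rather than values taken from the article.

    <af:tree id="t1" var="node"
             value="#{bindings.LocationsView1.treeModel}"
             rowSelection="multiple"
             binding="#{treeSampleBean.treeHandler}">
      <f:facet name="contextMenu">
        <af:popup id="p1">
          <af:menu id="m1">
            <af:commandMenuItem id="cmi1" text="Delete"
                                actionListener="#{treeSampleBean.onTreeNodeDelete}"/>
          </af:menu>
        </af:popup>
      </f:facet>
      <f:facet name="nodeStamp">
        <af:outputText value="#{node}" id="ot1"/>
      </f:facet>
    </af:tree>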

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of: components units cost per unit and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create an “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element Azure Component Unit of Measure Cost Per Unit Estimated Use of Component Total Cost Per Component Cumulative Cost               Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API’s greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into ay of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the larger the sample (in this case I recommend the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
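As a rough illustration of the components / units / cost-per-unit matrix described earlier, the sketch below multiplies estimated usage by unit prices. Every component name and price here is a made-up placeholder; real figures must always come from the current Azure price list linked at the top of the post.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class CostModelSketch
    {
        static void Main()
        {
            // Hypothetical unit prices (per unit of measure) - placeholders only
            var unitCost = new Dictionary<string, decimal>
            {
                { "ComputeInstanceHours",   0.12m },  // per instance-hour
                { "StorageGbMonths",        0.15m },  // per GB-month
                { "StorageTransactions10k", 0.01m },  // per 10,000 transactions
                { "BandwidthGbOut",         0.15m }   // per GB egress
            };

            // Estimated monthly usage of each component for the application
            var estimatedUsage = new Dictionary<string, decimal>
            {
                { "ComputeInstanceHours",   2 * 24 * 30 },  // two small instances, all month
                { "StorageGbMonths",        50 },
                { "StorageTransactions10k", 120 },
                { "BandwidthGbOut",         200 }
            };

            // components x units x cost per unit, summed over the components used
            decimal monthlyCost = estimatedUsage.Sum(u => u.Value * unitCost[u.Key]);
            Console.WriteLine("Estimated monthly cost: {0:C}", monthlyCost);
        }
    }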

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspxCurrently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I need to implement some features that needs real-time communication and push notifications from server side I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, read-time communication application. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this even though it's based on SignalR 1. But when I tried to implement the authentication for my SignalR I was struggled 2 days and finally I got a solution by myself. This might not be the best one but it actually solved all my problem.   In many articles it's said that you don't need to worry about the authentication of SignalR since it uses the web application authentication. For example if your web application utilizes form authentication, SignalR will use the user principal your web application authentication module resolved, check if the principal exist and authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty. So I need to authentication and pass the principal by myself.   Firstly I need to create a class which delivered from "AuthorizeAttribute", that will takes the responsible for authenticate when SignalR connection established and any method was invoked. 1: public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute 2: { 3: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 4: { 5: } 6:  7: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 8: { 9: } 10: } The method "AuthorizeHubConnection" will be invoked when any SignalR connection was established. And here I'm going to retrieve the Bearer token from query string, try to decrypt and recover the login user's claims. 1: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 2: { 3: var dataProtectionProvider = new DpapiDataProtectionProvider(); 4: var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create()); 5: // authenticate by using bearer token in query string 6: var token = request.QueryString.Get(WebApiConfig.AuthenticationType); 7: var ticket = secureDataFormat.Unprotect(token); 8: if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated) 9: { 10: // set the authenticated user principal into environment so that it can be used in the future 11: request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } In the code above I created "TicketDataFormat" instance, which must be same as the one I used to generate the Bearer token when user logged in. Then I retrieve the token from request query string and unprotect it. If I got a valid ticket with identity and it's authenticated this means it's a valid token. Then I pass the user principal into request's environment property which can be used in nearly future. 
Since my website was built in AngularJS so the SignalR client was in pure JavaScript, and it's not support to set customized HTTP headers in SignalR JavaScript client, I have to pass the Bearer token through request query string. This is not a restriction of SignalR, but a restriction of WebSocket. For security reason WebSocket doesn't allow client to set customized HTTP headers from browser. Next, I need to implement the authentication logic in method "AuthorizeHubMethodInvocation" which will be invoked when any SignalR method was invoked. 1: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 2: { 3: var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId; 4: // check the authenticated user principal from environment 5: var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment; 6: var principal = environment["server.User"] as ClaimsPrincipal; 7: if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) 8: { 9: // create a new HubCallerContext instance with the principal generated from token 10: // and replace the current context so that in hubs we can retrieve current user identity 11: hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } Since I had passed the user principal into request environment in previous method, I can simply check if it exists and valid. If so, what I need is to pass the principal into context so that SignalR hub can use. Since the "User" property is all read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with principal assigned, and set to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my Hubs through "Context.User" as below. 1: public class DefaultHub : Hub 2: { 3: public object Initialize(string host, string service, JObject payload) 4: { 5: var connectionId = Context.ConnectionId; 6: ... ... 7: var domain = string.Empty; 8: var identity = Context.User.Identity as ClaimsIdentity; 9: if (identity != null) 10: { 11: var claim = identity.FindFirst("Domain"); 12: if (claim != null) 13: { 14: domain = claim.Value; 15: } 16: } 17: ... ... 18: } 19: } Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline. 1: app.Map("/signalr", map => 2: { 3: // Setup the CORS middleware to run before SignalR. 4: // By default this will allow all origins. You can 5: // configure the set of origins and/or http verbs by 6: // providing a cors options with a different policy. 7: map.UseCors(CorsOptions.AllowAll); 8: var hubConfiguration = new HubConfiguration 9: { 10: // You can enable JSONP by uncommenting line below. 11: // JSONP requests are insecure but some older browsers (and some 12: // versions of IE) require JSONP to work cross domain 13: // EnableJSONP = true 14: EnableJavaScriptProxies = false 15: }; 16: // Require authentication for all hubs 17: var authorizer = new QueryStringBearerAuthorizeAttribute(); 18: var module = new AuthorizeModule(authorizer, authorizer); 19: GlobalHost.HubPipeline.AddModule(module); 20: // Run the SignalR pipeline. We're not using MapSignalR 21: // since this branch already runs under the "/signalr" path. 22: map.RunSignalR(hubConfiguration); 23: }); On the client side should pass the Bearer token through query string before I started the connection as below. 
1: self.connection = $.hubConnection(signalrEndpoint); 2: self.proxy = self.connection.createHubProxy(hubName); 3: self.proxy.on(notifyEventName, function (event, payload) { 4: options.handler(event, payload); 5: }); 6: // add the authentication token to query string 7: // we cannot use http headers since web socket protocol doesn't support 8: self.connection.qs = { Bearer: AuthService.getToken() }; 9: // connection to hub 10: self.connection.start(); Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
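For completeness, the ticket that AuthorizeHubConnection unprotects has to be produced with a matching TicketDataFormat when the user logs in. The fragment below is only a sketch of that side of the handshake; the claim names, lifetime and placeholder user values are assumptions and not taken from Shaun's project.

    // At login time - must use the same data protection settings as the SignalR authorizer
    string userName = "someuser";                       // placeholder
    string userDomain = "somedomain";                   // placeholder
    var dataProtectionProvider = new DpapiDataProtectionProvider();
    var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());

    var identity = new ClaimsIdentity(WebApiConfig.AuthenticationType);
    identity.AddClaim(new Claim(ClaimTypes.Name, userName));
    identity.AddClaim(new Claim("Domain", userDomain)); // read back later via Context.User in the hub

    var properties = new AuthenticationProperties
    {
        IssuedUtc = DateTimeOffset.UtcNow,
        ExpiresUtc = DateTimeOffset.UtcNow.AddHours(8)
    };
    var bearerToken = secureDataFormat.Protect(new AuthenticationTicket(identity, properties));
    // bearerToken is what AuthService.getToken() hands back to the JavaScript client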

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2 which was released in July 2013 as a part of Oracle Web Tier 12c is the web server component of Oracle Fusion Middleware. In essence this is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) which includes modules developed specifically by Oracle. It provides a listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by Weblogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words all tools which are familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control etc.) presented as a part of Weblogic Management Framework and using for managing Java and System Components both for Weblogic Server and Standalone Domain types. You can familiarize yourself with these terms using related documentation: 1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html 2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260 In the given post I would like to cover rather simple use case how to configure OHS as web proxy in Weblogic Cluster environment. For example, we have existing Weblogic Domain where some managed servers have been joined to cluster and host business applications. We need to configure web proxy component which will act as entry point, load balancer for our cluster for user requests. Of course, we could install old good Apache HTTP Server and configure mod_wl plugin. However this solution not optimal from manageability perspective: we need to install Apache, install additional plugin then configure it by editing configuration file which is not really convenient for FMW Administrators and often increase time of performing of simple administrative task. Alternatively, we could use OHS as System Component within Weblogic Domain and use full power of Weblogic Management Framework in order to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me. First of all it is necessary to download OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html As we will use Fusion Middleware Control for managing OHS instances it is necessary to extend your domain with Enterprise Manager and Oracle ADF and JRF templates. This is not topic for focusing in this post, but you could get more information from documentation or one of my previous posts: http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64 https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c Note: you should have properly configured Node Manager utility for managing OHS instances Let’s consider configuration process step by step: 1. Shut down all Weblogic instances of existing domain including Admin Server; 2. Install Oracle HTTP Server. You should use your Fusion Middleware Home Path (e.g. /u01/Oracle/FMW12) for Installation Location and select Colocated HTTP Server option as Installation Type. I will not focus on this topic in this post. All information related to OHS installation you could find here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009 3. Next we need to extend our existing domain with OHS component. 
In order to do this, you should do the following: a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh); b. On step 1, select the Update an existing domain option and point to your Fusion Middleware Home Path; c. On step 2, check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates; d. Go through the other steps without any changes and finish the configuration process. 4. Start the Admin Server and all managed servers related to your cluster; 5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL; 6. Now we will create an OHS instance within our WebLogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item; 7. Enter edit mode by clicking the Changes -> Lock&Edit menu item; 8. Create a new OHS instance by clicking the Create button; 9. Define the Instance Name (e.g. DevOSH) and Machine parameters; 10. Now we need to define the listen port. By default OHS uses port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. To do so, right click the newly created OHS instance (left-hand panel) and navigate to Administration -> Port Configuration; 11. Click the record with port number 7777 and then click the Edit button; 12. Change the port number value (in our case to 8080) and then click the OK button; 13. Next we need to edit the mod_wl_ohs configuration to enable OHS to act as a proxy for WebLogic Server instances/clusters; 14. To do this, right click the OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration; a. In WebLogic Cluster, enter the cluster address (define <host:port> for all managed servers that participate in the cluster), e.g. 192.168.56.2:7004,192.168.56.2:7005; b. Define the WebLogic Port parameter, the port at which the Oracle WebLogic Server host listens for connection requests from the module (or from other servers); c. Check the Dynamic Server List option. This dynamically updates the cluster list for every request; d. In the Location table, define the list of endpoint locations you would like to process. To do this, click the Add Row button and define the Location, WebLogic Cluster, Path Trim and Path Prefix parameters (if required); e. Click the Apply button to save the changes. 15. Activate the changes by clicking the Changes -> Activate Changes menu item; 16. Finally, start the configured OHS instance. Right click the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item; 17. Ensure that the OHS instance is up and running and then test your environment by running an application deployed to your WebLogic Cluster, accessing it via the OHS web proxy.
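Behind the scenes, the mod_wl_ohs settings entered in step 14 end up in the instance's mod_wl_ohs.conf file. A hand-written equivalent would look roughly like the fragment below; the host addresses, ports and the /myapp location are illustrative values, not requirements.

    <IfModule weblogic_module>
        WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
        DynamicServerList ON
    </IfModule>

    <Location /myapp>
        SetHandler weblogic-handler
        WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
        PathTrim /myapp
    </Location>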

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging.  For example let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL Table. This is a common scenario and through experience you’ll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK Table Hint because I want to be sensitive to the “active” records in the table and I want to exclude them.  I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATES should be allowed to lock the rows affected because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however with the UPDLOCK hint this condition is mitigated against. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to be READ COMMITTED (rather than the default of SERIALIZABLE). Guidance In the Stored Procedure that returns records to the multiple subscribers: Perform the UPDATE first. Change the flag that makes the record available to subscribers.  Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state or for other auditing purposes. In the UPDATE statement use the (UPDLOCK) Table Hint on the UPDATE statement to prevent lock escalation. In the UPDATE statement also use a WHERE Clause that uses a sub-select with a (READPAST) Table Hint to select the records that you’re going to update. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation. Finally do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint.  
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name).  NOLOCK is generally the better fit in this part of the scenario. See the following as an example: CREATE PROCEDURE [dbo].[usp_NewCustomersSelect] AS BEGIN -- OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON -- DECLARE TEMP TABLE -- Note that this example uses CustomerId as an identifier; -- you could just use the Identity column Id if that’s all you need. DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) ) -- PERFORM UPDATE FIRST -- [Customers] is the name of the table -- [Id] is the Identity Column on the table -- [CustomerId] is the business document key used to identify the -- record globally, i.e. in other systems or across SQL tables -- [Status] is INT or BIT field (if the status is a binary state) -- [LastUpdated] is a datetime field used to record the time of the -- last update UPDATE [Customers] WITH (UPDLOCK) SET [Status] = 1, [LastUpdated] = GETDATE() OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable WHERE ([Id] = (SELECT TOP 100 [Id] FROM [Customers] WITH (READPAST) WHERE ([Status] = 0) ORDER BY [Id] ASC)) -- PERFORM SELECT FROM ENTITY TABLE SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName], [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip], [C].[ShippingMethod], [C].[Id] FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP] WHERE ([C].[CustomerId] = [TEMP].[CustomerId]) END In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue it is necessary to have a “Watch Dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc.. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds: CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog] AS BEGIN -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON DECLARE @MaxWait int; SET @MaxWait = 60 IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST) WHERE ([Status] = 1) AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)) BEGIN SELECT 1 AS [IsWatchDogError] END ELSE BEGIN SELECT 0 AS [IsWatchDogError] END END Downloads The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records.  I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all. References http://msdn.microsoft.com/en-us/library/ms187373.aspx http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492 http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
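One step the article leaves to the reader is how a subscriber marks a record as finished once it has processed it. A minimal sketch, assuming the same Customers table and the status convention above (0 = new, 1 = in progress, 2 = processed), might look like this:

    CREATE PROCEDURE [dbo].[usp_NewCustomersMarkProcessed]
        @CustomerId NVARCHAR(255)
    AS
    BEGIN
        SET NOCOUNT ON
        -- Move the record out of the "In Progress" state once the subscriber has finished with it
        UPDATE [Customers] WITH (UPDLOCK)
        SET [Status] = 2,
            [LastUpdated] = GETDATE()
        WHERE ([CustomerId] = @CustomerId) AND ([Status] = 1)
    END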

    Read the article

  • Could not load type 'Default.DataMatch' in DataMatch.aspx file

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2008 website, but now I am getting the above error when I build it. I included the Default.aspx, Default.aspx.cs, DataMatch.aspx, and DataMatch.aspx.cs files below. What do I need to do to fix this? Default.aspx: <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> ... DataMatch.aspx: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="DataMatch.aspx.cs" Inherits="_Default.DataMatch" %> ... Default.aspx.cs: using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; //protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; //protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { //GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); //GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. 
try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; if (!Page.IsPostBack) { Load_GridData(); } } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } } DataMatch.aspx.cs: using System; using System.Collections; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; using System.Diagnostics; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using ADONET_namespace; //using MatrixApp; //namespace AddFileToSQL //{ public partial class DataMatch : AddFileToSQL._Default { protected System.Web.UI.WebControls.PlaceHolder phTextBoxes; protected System.Web.UI.WebControls.PlaceHolder phDropDownLists; protected System.Web.UI.WebControls.Button btnAnotherRequest; protected System.Web.UI.WebControls.Panel pnlCreateData; protected System.Web.UI.WebControls.Literal lTextData; protected System.Web.UI.WebControls.Panel pnlDisplayData; protected static string inputfile2; static string[] headers = null; static string[] data = null; static string[] data2 = null; static DataTable myInputFile = new DataTable("MyInputFile"); static string[] myUserSelections; static bool restart = false; private DropDownList[] newcol; int @temp = 0; string @tempS = ""; string @tempT = ""; // a Property that manages a counter stored in ViewState protected int NumberOfControls { get { return (int)ViewState["NumControls"]; } set { ViewState["NumControls"] = value; } } private Hashtable ddl_ht { get { return (Hashtable)ViewState["ddl_ht"]; } set { ViewState["ddl_ht"] = value; } } // Page Load private void Page_Load(object sender, System.EventArgs e) { if (!Page.IsPostBack) { ddl_ht = new Hashtable(); this.NumberOfControls = 0; } } // This data comes from input file private void PopulateFileInputTable() { myInputFile.Columns.Clear(); string strInput, newrow; string[] oneRow; DataColumn myDataColumn; DataRow myDataRow; int result, numRows; //Read the input file strInput = Session["Message"].ToString(); data = strInput.Split('\r'); //Headers headers = data[0].Split('|'); //Data for (int i = 0; i < data.Length; i++) { newrow = data[i].TrimStart('\n'); data[i] = newrow; } result = String.Compare(data[data.Length - 1], ""); numRows = data.Length; if (result == 0) { numRows = numRows - 1; } data2 = new string[numRows]; for (int a = 0, b = 0; a < numRows; a++, b++) { data2[b] = data[a]; } // Create columns for (int col = 0; col < headers.Length; col++) { @temp = (col + 1); @tempS = @temp.ToString(); @tempT = "@col"+ @temp.ToString(); myDataColumn = new 
DataColumn(); myDataColumn.DataType = Type.GetType("System.String"); myDataColumn.ColumnName = headers[col]; myInputFile.Columns.Add(myDataColumn); ddl_ht.Add(@tempT, headers[col]); } // Create new DataRow objects and add to DataTable. for (int r = 0; r < numRows - 1; r++) { oneRow = data2[r + 1].Split('|'); myDataRow = myInputFile.NewRow(); for (int c = 0; c < headers.Length; c++) { myDataRow[c] = oneRow[c]; } myInputFile.Rows.Add(myDataRow); } NumberOfControls = headers.Length; myUserSelections = new string[NumberOfControls]; } //Create display panel private void CreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = CreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } //Recreate display panel private void RecreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = RecreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } // Add DropDownList Control to Placeholder private DropDownList[] CreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = true; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } protected void ddlList_SelectedIndexChanged(object sender, EventArgs e) { DropDownList ddl = (DropDownList)sender; string ID = ddl.ID; } // Add TextBoxes Control to Placeholder private DropDownList[] RecreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = false; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } private void CreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + 
counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Add TextBoxes Control to Placeholder private void RecreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Create TextBoxes and DropDownList data here on postback. protected override void CreateChildControls() { // create the child controls if the server control does not contains child controls this.EnsureChildControls(); // Creates a new ControlCollection. this.CreateControlCollection(); // Here we are recreating controls to persist the ViewState on every post back if (Page.IsPostBack) { RecreateDisplayPanel(); RecreateLabels(); } // Create these conrols when asp.net page is created else { PopulateFileInputTable(); CreateDisplayPanel(); CreateLabels(); } // Prevent dropdownlists and labels from being created again. if (restart == false) { this.ChildControlsCreated = true; } else if (restart == true) { this.ChildControlsCreated = false; } } private void AppendRecords() { switch (targettable) { case "ContactType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataCT(myInputFile.Rows[r], ddl_ht); } break; case "Contact": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataC(myInputFile.Rows[r], ddl_ht); } break; case "AddressType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataAT(myInputFile.Rows[r], ddl_ht); } break; default: resultLabel.Text = "You do not have access to modify this table. Please select a different target table and try again."; restart = true; break; //throw new ArgumentOutOfRangeException("targettable type", targettable); } } // Read all the data from TextBoxes and DropDownLists protected void btnSubmit_Click(object sender, System.EventArgs e) { //int cnt = FindOccurence("DropDownListID"); AppendRecords(); pnlDisplayData.Visible = false; btnSubmit.Visible = false; resultLabel.Attributes.Add("style", "align:center"); btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); int bSubmitPosition = NumberOfControls; btnSubmit.Style.Add("top", System.Convert.ToString(bSubmitPosition)+"px"); resultLabel.Visible = true; Instructions.Visible = false; if (restart == true) { CreateChildControls(); } } private int FindOccurence(string substr) { string reqstr = Request.Form.ToString(); return ((reqstr.Length - reqstr.Replace(substr, "").Length) / substr.Length); } #region Web Form Designer generated code override protected void OnInit(EventArgs e) { // // CODEGEN: This call is required by the ASP.NET Web Form Designer. // InitializeComponent(); base.OnInit(e); } /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { } #endregion } //}
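One thing worth checking against the error in the title (an observation on the code as posted, not a confirmed fix): DataMatch.aspx declares Inherits="_Default.DataMatch", but the code-behind class is declared simply as DataMatch with its namespace commented out, and the page uses CodeBehind in what otherwise looks like a web site project that uses CodeFile. A page directive consistent with the posted code-behind would look more like this:

    <%@ Page Language="C#" AutoEventWireup="true"
        CodeFile="DataMatch.aspx.cs" Inherits="DataMatch" %>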

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas In this blog post, I’ll show you how to use the ‘TfsDemoSetup’ application to configure and setup the TFS 2012 task board for a demo in well less than 1 minute Step 1 – Note what you get with a newly created Team Project Create a new Team Project on TFS Preview         2. Click Create Project         3. The project creation has completed        4. Open the team web access and have a look at the home page Note – Since I created the project I am the only Team Member       A default Team by the name AdventureWorks Team has been created       A few sprints have been assigned to the default team but no dates for sprint start and end have been specified        A default Area Path for the team is missing       Step 2. Download the TFS Demo Setup Console application from Codeplex 1. Navigate to the TFS Demo Setup project on codeplex https://tfsdemosetup.codeplex.com/       2. Download Instructions and TFSDemo_<version>      3. Follow the steps in the Instructions.txt file      4. Unzip TFSDemo_<version> and open the target folder. Two important files in this folder, DemoDictionary.xml – This file contains the settings using which the demo environment will be setup SetupTfsDemo.exe – This will run the TFS demo environment setup application       Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc) 1. Open up DemoDictionary.xml      2. Walkthrough DemoDictionary.xml             a. Basic Team Details         <Name> – Specify the name of the team         <Description> – Specify a description to go with the team         <SetAsDefaultTeam> – This accepts a value “true/false” when set to true, the newly created team will be set as the default team in the project         <BacklogIterationPath> – Specify a backlog iteration path for the team     b. Iterations – The iterations you specify here will be set as the Teams iterations        <Iterations> – Accepts multiple <Iteration> nodes.        <Iteration> – This is the most granular level of an Iteration        <Path> – The path to the sprint, sample values, Release 1\Sprint 1 or Release 2\Sprint 2        <StartDate> – The sprint start date, this accepts the format yyyy-MM-dd        <FinishDate> – The sprint finish date, this accepts the format yyyy-MM-dd     c. Team Members – Team Members that need to be added to the newly created team will be added under this section         <TeamMembers> – Accepts multiple <TeamMember> nodes.         <TeamMember> – This is the most granular level of a Team Member         <User> – This accepts the username, if you are running this against TFSPreview then the live id of the user will need to be passed. If you are running this against TFS Server then the user id i.e. Domain\UserName will need to be passed          <Team> – Specify the name of the team that you want the user to be assigned to.     d. WorkItems – This section will allow you to add work items (product backlog Items and linked tasks) to the current sprint of the team         <WorkItems> – Accepts multiple <WorkItem> nodes.         <WorkItem> – Accepts one <ProductBacklogItem> and multiple <Task> nodes         <ProductBacklogItem> – Used to create a Product Backlog Item type work item               <Title> – The title of the Product Backlog Item               <Description> – The description of the Product Backlog Type Work Item               <AssignedTo> – Used to assign the work item to a team member. 
The team member name or email address can be passed.               <Effort> – The total effort required to complete the Product Backlog Item         <Task> – Used to create a linked task to the Product Backlog type work item               <Title> – The title of the task type work item               <Description> – The description of the Task Type Work Item               <AssignedTo> – Used to assign the work item to a team member. The team member name or email address can be passed.               <RemainingWork> – The remaining effort to complete the task type work item Step 4 – Setup the demo environment against the newly created Team Project 1. Run SetupTfsDemo.exe    2. Enter Y or y on the prompt to continue setting up TFS Demo setup.     3. Select the newly created Team project, for this blogpost I had created the Team Project – AdventureWorks, so that is what I’ll select in the Connect to TFS Server pop up    3. Click Connect and follow the messages that are written to the console application       Step 5 – Validate that the Demo environment is set up as per the configuration 1. The team web access is all lit up You have a Sprint, a burn down chart, team members…    2. The team Demo has been added and has been set up as the default team    3. The Sprint Backlog Iteration path, Sprints and Sprint start and finish dates have been set    4. The default area path has been setup    5. Taskboard – Backlog items view    6. Taskboard – Team members view      Step 6 – Exception Handling! 1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template. 2. You are likely to run into an exception if you mess up the config file 3. If the team already exists and you run the console app to set up the team (with the same name) you will run into exceptions. Please remember this is just an alpha release, if you have any feedback please leave a comment! Didn’t I say that it would just take 1 minute, Enjoy!
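For reference, here is a rough sketch of what a DemoDictionary.xml might look like, pieced together from the element descriptions above. The element names are the ones documented in the walkthrough; the root element, the exact nesting of the sections and all of the values (team name, dates, users, work items) are illustrative assumptions, not the file that ships with the tool.

<!-- Illustrative sketch only: element names follow the walkthrough above, nesting and values are assumed -->
<DemoDictionary>
  <Name>Demo</Name>
  <Description>Demo team created by SetupTfsDemo.exe</Description>
  <SetAsDefaultTeam>true</SetAsDefaultTeam>
  <BacklogIterationPath>AdventureWorks</BacklogIterationPath>
  <Iterations>
    <Iteration>
      <Path>Release 1\Sprint 1</Path>
      <StartDate>2012-09-01</StartDate>
      <FinishDate>2012-09-14</FinishDate>
    </Iteration>
    <Iteration>
      <Path>Release 1\Sprint 2</Path>
      <StartDate>2012-09-15</StartDate>
      <FinishDate>2012-09-28</FinishDate>
    </Iteration>
  </Iterations>
  <TeamMembers>
    <TeamMember>
      <User>someone@example.com</User>
      <Team>Demo</Team>
    </TeamMember>
  </TeamMembers>
  <WorkItems>
    <WorkItem>
      <ProductBacklogItem>
        <Title>As a customer I can browse the product catalogue</Title>
        <Description>Sample product backlog item</Description>
        <AssignedTo>someone@example.com</AssignedTo>
        <Effort>5</Effort>
      </ProductBacklogItem>
      <Task>
        <Title>Build the catalogue list view</Title>
        <Description>Sample linked task</Description>
        <AssignedTo>someone@example.com</AssignedTo>
        <RemainingWork>3</RemainingWork>
      </Task>
    </WorkItem>
  </WorkItems>
</DemoDictionary>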

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs' JavaScript RiverTrail provides GPU accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray The ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas. // Create an empty Parallel Array var pa = new ParallelArray(); // pa0 = <>   // Create a ParallelArray out of a nested JS array. // Note that the inner arrays are also ParallelArrays var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>   // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>   // Create a ParallelArray from canvas.  This creates a PA with shape [w, h, 4], var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray   ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter that return a new ParallelArray. Other functions are scalar - reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - it cannot inspect neighboring values. // Adding one to each element. 
var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.map(function inc(v) {     return v+1; }); //<2,3,4,5,6> combine Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine, can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive. var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.combine(function inc(i) { return this.get(i)+1; }); reduce reduces the elements from an array to a single scalar result – e.g. Sum. // Calculate the sum of the elements var source = new ParallelArray([1,2,3,4,5]); var sum = source.reduce(function plus(a,b) { return a+b; }); scan Like reduce, but stores the intermediate results – return a ParallelArray whose ith elements is the results of using the elemental function to reduce the elements between 0 and I in the original ParallelArray. // do a partial sum var source = new ParallelArray([1,2,3,4,5]); var psum = source.scan(function plus(a,b) { return a+b; }); //<1, 3, 6, 10, 15> scatter a reordering function - specify for a certain source index where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result: var source = new ParallelArray([1,2,3,4,5]); var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1> // if there is a conflict use the max. use 33 as a default value. var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); //<2, 33, 5, 3, 4> filter // filter out values that are not even var source = new ParallelArray([1,2,3,4,5]); var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4> Flatten used to collapse the outer dimensions of an array into a single dimension. pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>> pa.flatten(); // <1,2,3,4> Partition used to restore the original shape of the array. var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4> pa.partition(2); // <<1,2>,<3,4>> Get return value found at the indices or undefined if no such value exists. var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]) pa.get([1,1]); // 11 pa.get([1]); // <10,11,12,13,14>

    Read the article

  • What does Ruby have that Python doesn't, and vice versa?

    - by Lennart Regebro
    There is a lot of discussions of Python vs Ruby, and I all find them completely unhelpful, because they all turn around why feature X sucks in language Y, or that claim language Y doesn't have X, although in fact it does. I also know exactly why I prefer Python, but that's also subjective, and wouldn't help anybody choosing, as they might not have the same tastes in development as I do. It would therefore be interesting to list the differences, objectively. So no "Python's lambdas sucks". Instead explain what Ruby's lambdas can do that Python's can't. No subjectivity. Example code is good! Don't have several differences in one answer, please. And vote up the ones you know are correct, and down those you know are incorrect (or are subjective). Also, differences in syntax is not interesting. We know Python does with indentation what Ruby does with brackets and ends, and that @ is called self in Python. UPDATE: This is now a community wiki, so we can add the big differences here. Ruby has a class reference in the class body In Ruby you have a reference to the class (self) already in the class body. In Python you don't have a reference to the class until after the class construction is finished. An example: class Kaka puts self end self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python. All classes are mutable in Ruby This lets you develop extensions to core classes. Here's an example of a rails extension: class String def starts_with?(other) head = self[0, other.length] head == other end end Ruby has Perl-like scripting features Ruby has first class regexps, $-variables, the awk/perl line by line input loop and other features that make it more suited to writing small shell scripts that munge text files or act as glue code for other programs. Ruby has first class continuations Thanks to the callcc statement. In Python you can create continuations by various techniques, but there is no support built in to the language. Ruby has blocks With the "do" statement you can create a multi-line anonymous function in Ruby, which will be passed in as an argument into the method in front of do, and called from there. In Python you would instead do this either by passing a method or with generators. Ruby: amethod { |here| many=lines+of+code goes(here) } Python: def function(here): many=lines+of+code goes(here) amethod(function) Interestingly, the convenience statement in Ruby for calling a block is called "yield", which in Python will create a generator. Ruby: def themethod yield 5 end themethod do |foo| puts foo end Python: def themethod(): yield 5 for foo in themethod: print foo Although the principles are different, the result is strikingly similar. Python has built-in generators (which are used like Ruby blocks, as noted above) Python has support for generators in the language. In Ruby you could use the generator module that uses continuations to create a generator from a block. Or, you could just use a block/proc/lambda! Moreover, in Ruby 1.9 Fibers are, and can be used as, generators. docs.python.org has this generator example: def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] Contrast this with the above block examples. Python has flexible name space handling In Ruby, when you import a file with require, all the things defined in that file will end up in your global namespace. This causes namespace pollution. 
The solution to that is Rubys modules. But if you create a namespace with a module, then you have to use that namespace to access the contained classes. In Python, the file is a module, and you can import its contained names with from themodule import *, thereby polluting the namespace if you want. But you can also import just selected names with from themodule import aname, another or you can simply import themodule and then access the names with themodule.aname. If you want more levels in your namespace you can have packages, which are directories with modules and an __init__.py file. Python has docstrings Docstrings are strings that are attached to modules, functions and methods and can be introspected at runtime. This helps for creating such things as the help command and automatic documentation. def frobnicate(bar): """frobnicate takes a bar and frobnicates it >>> bar = Bar() >>> bar.is_frobnicated() False >>> frobnicate(bar) >>> bar.is_frobnicated() True """ Python has more libraries Python has a vast amount of available modules and bindings for libraries. Python has multiple inheritance Ruby does not ("on purpose" -- see Ruby's website, see here how it's done in Ruby). It does reuse the module concept as a sort of abstract classes. Python has list/dict comprehensions Python: res = [x*x for x in range(1, 10)] Ruby: res = (0..9).map { |x| x * x } Python: >>> (x*x for x in range(10)) <generator object <genexpr> at 0xb7c1ccd4> >>> list(_) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] Ruby: p = proc { |x| x * x } (0..9).map(&p) Python: >>> {x:str(y*y) for x,y in {1:2, 3:4}.items()} {1: '4', 3: '16'} Ruby: >> Hash[{1=>2, 3=>4}.map{|x,y| [x,(y*y).to_s]}] => {1=>"4", 3=>"16"} Python has decorators Things similar to decorators can be created in Ruby, and it can also be argued that they aren't as necessary as in Python.

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well I was reading my monthly MSDN magazine last last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow style scripts and have them generate EasyTest scripts for use with XAF.  Let me outline and show a few things below.  I plan on releasing this code in a short while, I just wanted to preview what I was thinking. Before we begin… First, if you have not read the article in MSDN, here is the link to the article that I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax and how use the “Steps” logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text based files to execute automated commands within your application. Why would we do this?  Because as you will see below, the cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here.  The basics of the syntax are that Given X When Y Then Z.  For example, Given I am at the login screen When I enter my login credentials Then I expect to see the home screen.  Pretty easy syntax to follow. Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed then let’s write our first test. Let’s get started… So where to start.  Create a new testing project within your solution.  I typically call this with a similar naming convention as used by XAF, my project name .FunctionalTests (i.e.  AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file as you do your class files after the test they are performing. Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above as it provides valuable resources on how to write your test syntax.  In this test below you can see how I defined the documentation in the Feature section.  This is strictly for our purposes of readability and do not effect the test.  The next section is the Scenario Outline which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below you can see that Given I am starting a new test and the application is open.  This means I want a new EasyTest file and the windows application generated by XAF is open.  Next When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button, tells XAF to click the new button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  
Then I verify results tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data. Almost there.  Now we must save the feature file and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow so just save the file.  What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect then, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? What we have done is to intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax.  This is the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override and make your own rules as you see fit for your project.  Follow the MSDN magazine above to start your own.  You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review. Again we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup. Thanks
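To give a feel for what such step definitions look like, here is a minimal C# sketch in the spirit of the StepDefinitions described above. The step wording follows the scenario quoted earlier; the command strings, the table column names (Field/Value), the output file and the verification step are placeholders for illustration only, not the real EasyTest syntax or the library being published on CodePlex.

using System.Collections.Generic;
using System.IO;
using TechTalk.SpecFlow;

namespace AlbumManager.FunctionalTests
{
    [Binding]
    public class EasyTestSteps
    {
        // Commands accumulated per scenario and later written out as an EasyTest script.
        private readonly List<string> _commands = new List<string>();

        [Given(@"I am starting a new test and the application is open")]
        public void GivenANewTestAndTheApplicationIsOpen()
        {
            _commands.Clear();
            _commands.Add("StartApplication");       // placeholder, not real EasyTest syntax
        }

        [When(@"I am at the (.*) screen")]
        public void WhenIAmAtTheScreen(string listView)
        {
            _commands.Add("Navigate: " + listView);  // e.g. the Albums list view
        }

        [When(@"I click the (.*) button")]
        public void WhenIClickTheButton(string action)
        {
            _commands.Add("Action: " + action);      // e.g. New:Album, Save and Close
        }

        [When(@"I enter the following information")]
        public void WhenIEnterTheFollowingInformation(Table fields)
        {
            // Assumes the feature file table has Field and Value columns.
            foreach (var row in fields.Rows)
                _commands.Add("Fill: " + row["Field"] + " = " + row["Value"]);
        }

        [Then(@"I verify results")]
        public void ThenIVerifyResults()
        {
            // The real implementation writes an EasyTest file, executes it and parses the
            // XML results, failing the unit test (and keeping the screenshot) on failure.
            File.WriteAllLines(Path.Combine(Path.GetTempPath(), "scenario.txt"), _commands);
        }
    }
}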

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution. What we did to demonstrate scenario 1; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder 4. on the Mac in that central location drag the created folder to the Mac desktop, this should work fine, no problem 5. on the PC rename that folder 6. on the Mac drag a file to that renamed folder, this should error with the following message; a. You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? b. Select skip, response is the filename is copied to the location with zero or small byte size. Try opening it and you get file is corrupted error message. What we did to demonstrate scenario 2; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder then create a subfolder 4. copy some content into the subfolder 5. on the Mac in that central location drag the created top level folder to the Mac desktop, this should work fine, no problem 6. on the PC rename that subfolder 7. on the Mac drag that top level folder to the Mac desktop, this should error on the Mac with the following; a. The operation cannot be completed because you do not have sufficient privileges for b. The operation cannot be completed because you do not have sufficient privileges for 8. on the Mac, if you open that subfolder you can see the file copied in step 4 above but, you can not open that file, you get the following message if you try; a. There was an error opening this document. You do not have permission to open this file. 9. on the PC drag some content into the top level folder 10. on the Mac you can open that file directly from the server or copy it locally, no problem, however-the subfolder is still corrupted or locked, whichever 11. on the PC rename the top level folder 12. on the Mac that same file just opened in step 10 above is now not accessible, get the following message; a. The document could not be opened. I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observer or act on the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key is once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder and not rename the folder name. This has to be done on a PC because the corrupted folder is not accessible by a Mac user. Another problem that dovetails with the above is that we know certain characters are not allowed for PC folder or filenames. If a Mac user creates a folder with a slash in the file name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked from that folder. Will get the following error message; - Sorry, the operation could not be completed because an unexpected error occurred. - (Error code - 50) In addition to the above mentioned character issue with folders, the problem is more evil with filenames. 
If, for example, you create a file with a slash in the filename on a Mac and copy it to the server you will get the following error message; - You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? Select either the Stop or Skip button. It does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message; - Couldn't open the file. It may be corrupt or in a file format that it doesn't recognize. From the user's perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt when they open that file. I want to make a note on this problem. It is the PC causing the issue. You can change folder and file names all day on a Mac and you don't have a problem as long as a character is not the issue. Once you change the file name or folder name from a PC the entire folder structure from that level down is corrupted. But it has to be resolved from a PC by creating a new folder and copying the contents to the new folder as stated above. Is something not configured correctly? SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64" Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD Note: We use Novell clients on all Windows systems to connect to the servers for file access and network storage. We use AFP to allow OS X systems to connect to the servers.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Find Table without Clustered Index – Find Table with no Primary Key Clustered index is very important concept for any table. They impact the performance very heavily. Here is a quick script to find tables without a clustered index. Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types Question: “Is VARCHAR (MAX) big enough to store the TEXT field?” Answer: “Yes, VARCHAR(MAX) is big enough to accommodate TEXT field. TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server, SQL Server 2005 provides backward compatibility to data types but it is recommended to use new data types which are VARHCAR (MAX), NVARCHAR (MAX) and VARBINARY (MAX).” Limiting Result Sets by Using TABLESAMPLE – Examples Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and they are are not in any order. This sampling can be based on a percentage of number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set. User Defined Functions (UDF) Limitations UDF have its own advantage and usage but in this article we will see the limitation of UDF. Things UDF can not do and why Stored Procedure are considered as more flexible then UDFs. Stored Procedure are more flexibility then User Defined Functions(UDF). However, this blog post is a good read to know what are the limitations of UDF. Change Database Compatible Level – Backward Compatibility For a long time SQL Server stayed on the compatibility level of 80 which is of SQL Server 2000. However, as soon as SQL Server 2005 introduced the issue of compatibility was quite a major issue. Since that time MS has been releasing the versions at every 2-3 years, changing compatibility is a ever popular topic. In this blog post, we learn how we can do the same using T-SQL. We can also do the same using SSMS and here is the blog post for the same: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio. Constraint on VARCHAR(MAX) Field To Limit It Certain Length How can I limit the VARCHAR(MAX) field with maximum length of 12500 characters only. His Question was valid as our application was allowed 12500 characters. First of all – this requirement is bit strange but if someone wants to do the same, they can do it as described in this blog post. 2008 UNPIVOT Table Example Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the same concept in very simple words. Create Default Constraint Over Table Column A simple straight to script blog post – I still use this blog quite many times for my own reference. UDF – Get the Day of the Week Function It took me 4 iteration to find this very simple function which can immediately get the day of the week in a single line. 2009 Find Hostname and Current Logged In User Name There are two tricks listed in this blog post where users can find out the hostname and current logged user name immediately and very easily. 
Interesting Observation of Logon Trigger On All Servers When I was doing a project, I made an interesting observation of executing a logon trigger multiple times. It was absolutely unexpected for me! As I was logging only once, naturally, I was expecting the entry only once. However, it did it multiple times on different threads – indeed an eccentric phenomenon at first sight! Difference Between Candidate Keys and Primary Key One needs to be very careful in selecting the Primary Key as an incorrect selection can adversely impact the database architect and future normalization. For a Candidate Key to qualify as a Primary Key, it should be Non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key. Create Multiple Filegroup For Single Database Why should one create multiple file group for any database and what are the advantages of the same. In this blog post, I explain the same in detail. List All Objects Created on All Filegroups in Database In this blog post we discuss the essential question – “How can I find which object belongs to which filegroup. Is there any way to know this?” 2010 DATE and TIME in SQL Server 2008 When DATE is converted to DATETIME it adds the of midnight. When TIME is converted to DATETIME it adds the date of 1900 and it is something one wants to consider if you are going to run scripts from SQL Server 2008 to earlier version with CONVERT. Disabled Index and Update Statistics If you do not need a nonclustered index, I suggest you to drop it as keeping them disabled is an overhead on your system. This is because every time the statistics are updated for system all the statistics for disabled indexes are also updated. Precision of SMALLDATETIME – A 1 Minute Precision The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero. 2011 Getting Columns Headers without Result Data – SET FMTONLY ON SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON the resultset only have headers of the results but no data. Copy Database from Instance to Another Instance – Copy Paste in SQL Server SQL Server has a feature which copy database from one database to another database and it can be automated as well using SSIS. Make sure you have SQL Server Agent Turned on as this feature will create a job. Puzzle – SELECT * vs SELECT COUNT(*) If you have ever wondered SELECT * gives error when executed alone but SELECT COUNT(*) does not. Why? in that case, you should read this blog post. Creating All New Database with Full Recovery Model This blog post is very based on very interesting story where the user wants to do something by default for every single new database created. Model database is a secret weapon which should be used very carefully and with proper evalution. If used carefully this can be a very much beneficiary when we need a newly created database behave in certain fashion. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. 
Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Can anyone remember their final day of schooling?  This is probably a silly question because – of course you can!  Many people mark this as the most exciting, happiest day of their life.  It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read five part series on developer training subject Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Monday, August 11, 2014

    CodePlex Daily Summary for Monday, August 11, 2014Popular ReleasesSpace Engineers Server Manager: SESM V1.15: V1.15 - Updated Quartz library - Correct a bug in the new mod managment - Added a warning if you have backup enabled on a server but no static map configuredAspose for Apache POI: Missing Features of Apache POI SS - v 1.2: Release contain the Missing Features in Apache POI SS SDK in comparison with Aspose.Cells What's New ? Following Examples: Create Pivot Charts Detect Merged Cells Sort Data Printing Workbooks Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.AngularGo (SPA Project Template): AngularGo.VS2013.vsix: First ReleaseTouchmote: Touchmote 1.0 beta 13: Changes Less GPU usage Works together with other Xbox 360 controls Bug fixesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project? I appreciate that you like it (2000 downloads over past 6 months), but may be you have to say something? What do you dislike in the module? Maybe you would love to see some new functionality? Tell, what you think! Installation guide:Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI ...Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE LinkGroup.GroupName renamed to GroupKey NEW FEATURES Improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness) New ModernProgressRing control with 8 builtin styles New LinkCommands.NavigateLink routed command New Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents XLHelper XLColor_Static IntergerExtensions.ToStringLookup Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook Fix for hyperlinks with non-ASCII Characters Added basic workbook protection Fix for error thrown, when a spreadsheet contained comments and images Fix to Trim function Fix Invalid operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for Asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added Ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added in reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. 
Added the ability to return false in dragEnd to revert sort order Project changes Added nuget package for dragsort https://www.nuget.org/packages/dragsort Converted repository from SVN to MercurialBraintree Client Library: Braintree 2.32.0: Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create Allow billingaddress parameters and billingaddress_id to be passed outside of the nonce for PaymentMethod.create Add Subscriptions to paypal accounts Add PaymentMethod.update Add failonduplicatepaymentmethod option to PaymentMethod.create Add support for dispute webhooksThe Mario Kart 8 App: V1.0.2.1: First Codeplex release. WINDOWS INSTALLER ONLYAspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0: Release contain the Code Comparison for Features in Docx4j SDK and Aspose.Words What's New ?Following Examples: Accessing Document Properties Add Bookmarks Convert to Formats Delete Bookmarks Working with Comments Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.File System Security PowerShell Module: NTFSSecurity 2.4.1: Add-Access and Remove-Access now take multiple accoutsYourSqlDba: YourSqlDba 5.2.1.: This version improves alert message that comes a while after you install the script. First it says to get it from YourSqlDba.CodePlex.com If you don't want to update now, just-rerun the script from your installed version. To get actual version running just execute install.PrintVersionInfo. . You can go to source code / history and click on change set 72957 to see changes in the script.Manipulator: Manipulator: manipulatorXNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0: CHANGELOG Reverted old incomplete changes. Updated library for compatibility with Paint .NET 4. Updated project to NET 4.5. Updated version to 0.4.0.0. INSTALLATION INSTRUCTIONS Extract the ZIP file to your Paint.NET\FileTypes folder.EdiFabric: Release 4.1: Changed MessageContextWix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0 Custom UI Custom MSI Dialog Custom CLR Dialog External UIMath.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough) Statistics: Entropy ~Jeff Mastry Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu Resources: fix a corrupted exception message string Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. 
.Net 3.5: F# extensions now support .Net 3.5 as well .Net 3.5: NuGet package now contains pro...babelua: 1.6.5.1: V1.6.5.1 - 2014.8.7New feature: Formatting code; Stability improvement: fix a bug that pop up error "System.Net.WebResponse EndGetResponse";New ProjectsDouDou: a little project.Dynamic MVC: Dynamically generate views from your model objects for a data centric MVC application.EasyDb - Simple Data Access: EasyDb is a simple library for data access that allows you to write less code.ExpressToAbroad: just go!!!!!Full Silverlight Web Video/Voice Conferencing: The Goal of this project is to provide complete Open Source (Voice/Video Chatting Client/Server) Modules Using SilverlightGaia: Gaia is an app for Windows plataform, Gaia is like Siri and Google Now or Betty but Gaia use only text commands.pxctest: pxctestSTACS: Career Management System for MIT by Team "STACS"StrongWorld: StrongWorld.WebSuiteXevas Tools: Xevas is a professional coders group of 'Nimbuzz'. We make all tools for worldwide users of nimbuzz at free of cost.

    Read the article

  • Embedding generic sql queries into c# program

    - by Pooja Balkundi
    Okay referring to my first question code in the main, I want the user to enter employee name at runtime and then i take this name which user has entered and compare it with the e_name of my emp table , if it exists i want to display all information of that employee , how can I achieve this ? using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using MySql.Data.MySqlClient; namespace ConnectCsharppToMySQL { public class DBConnect { private MySqlConnection connection; private string server; private string database; private string uid; private string password; string name; //Constructor public DBConnect() { Initialize(); } //Initialize values private void Initialize() { server = "localhost"; database = "test"; uid = "root"; password = ""; string connectionString; connectionString = "SERVER=" + server + ";" + "DATABASE=" + database + ";" + "UID=" + uid + ";" + "PASSWORD=" + password + ";"; connection = new MySqlConnection(connectionString); } //open connection to database private bool OpenConnection() { try { connection.Open(); return true; } catch (MySqlException ex) { //When handling errors, you can your application's response based //on the error number. //The two most common error numbers when connecting are as follows: //0: Cannot connect to server. //1045: Invalid user name and/or password. switch (ex.Number) { case 0: MessageBox.Show("Cannot connect to server. Contact administrator"); break; case 1045: MessageBox.Show("Invalid username/password, please try again"); break; } return false; } } //Close connection private bool CloseConnection() { try { connection.Close(); return true; } catch (MySqlException ex) { MessageBox.Show(ex.Message); return false; } } //Insert statement public void Insert() { string query = "INSERT INTO emp (e_name, age) VALUES('Pooja R', '21')"; //open connection if (this.OpenConnection() == true) { //create command and assign the query and connection from the constructor MySqlCommand cmd = new MySqlCommand(query, connection); //Execute command cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Update statement public void Update() { string query = "UPDATE emp SET e_name='Peachy', age='22' WHERE e_name='Pooja R'"; //Open connection if (this.OpenConnection() == true) { //create mysql command MySqlCommand cmd = new MySqlCommand(); //Assign the query using CommandText cmd.CommandText = query; //Assign the connection using Connection cmd.Connection = connection; //Execute query cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Select statement public List<string>[] Select() { string query = "SELECT * FROM emp where e_name=(/*I WANT USER ENTERED NAME TO GET INSERTED HERE*/)"; //Create a list to store the result List<string>[] list = new List<string>[3]; list[0] = new List<string>(); list[1] = new List<string>(); list[2] = new List<string>(); //Open connection if (this.OpenConnection() == true) { //Create Command MySqlCommand cmd = new MySqlCommand(query, connection); //Create a data reader and Execute the command MySqlDataReader dataReader = cmd.ExecuteReader(); //Read the data and store them in the list while (dataReader.Read()) { list[0].Add(dataReader["e_id"] + ""); list[1].Add(dataReader["e_name"] + ""); list[2].Add(dataReader["age"] + ""); } //close Data Reader dataReader.Close(); //close Connection this.CloseConnection(); //return list to be displayed return list; } else { return list; } } public static void Main(String[] args) { DBConnect db1 = new DBConnect(); 
Console.WriteLine("Initializing"); db1.Initialize(); Console.WriteLine("Search :"); Console.WriteLine("Enter the employee name"); db1.name = Console.ReadLine(); db1.Select(); Console.ReadLine(); } } }
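One way to wire the user-entered name into the query is to pass it as a parameter rather than concatenating it into the SQL string. Below is a rough sketch of an alternative Select method for the DBConnect class above; the method name SelectByName and the printing loop are illustrative additions, not part of the original code.

//Select statement filtered by a parameterised employee name (sketch only)
public List<string>[] SelectByName(string employeeName)
{
    string query = "SELECT * FROM emp WHERE e_name = @name";

    //Create a list to store the result
    List<string>[] list = new List<string>[3];
    list[0] = new List<string>();
    list[1] = new List<string>();
    list[2] = new List<string>();

    //Open connection
    if (this.OpenConnection() == true)
    {
        MySqlCommand cmd = new MySqlCommand(query, connection);
        cmd.Parameters.AddWithValue("@name", employeeName); //bind the user's input safely

        MySqlDataReader dataReader = cmd.ExecuteReader();
        while (dataReader.Read())
        {
            list[0].Add(dataReader["e_id"] + "");
            list[1].Add(dataReader["e_name"] + "");
            list[2].Add(dataReader["age"] + "");
        }
        dataReader.Close();
        this.CloseConnection();
    }
    return list;
}

In Main, calling db1.SelectByName(db1.name) then returns only the matching rows, which can be written out with something like:

var rows = db1.SelectByName(db1.name);
for (int i = 0; i < rows[0].Count; i++)
{
    Console.WriteLine(rows[0][i] + " " + rows[1][i] + " " + rows[2][i]); //e_id, e_name, age
}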

    Read the article

  • Can this PHP version of SPICE be improved?

    - by Noctis Skytower
    I do not know much about PHP's standard library of functions and was wondering if the following code can be improved in any way. The implementation should yield the same results, the API should remain as it is, but ways to make is more PHP-ish would be greatly appreciated. This is a custom encryption library. Code <?php /*************************************** Create random major and minor SPICE key. ***************************************/ function crypt_major() { $all = range("\x00", "\xFF"); shuffle($all); $major_key = implode("", $all); return $major_key; } function crypt_minor() { $sample = array(); do { array_push($sample, 0, 1, 2, 3); } while (count($sample) != 256); shuffle($sample); $list = array(); for ($index = 0; $index < 64; $index++) { $b12 = $sample[$index * 4] << 6; $b34 = $sample[$index * 4 + 1] << 4; $b56 = $sample[$index * 4 + 2] << 2; $b78 = $sample[$index * 4 + 3]; array_push($list, $b12 + $b34 + $b56 + $b78); } $minor_key = implode("", array_map("chr", $list)); return $minor_key; } /*************************************** Create the SPICE key via the given name. ***************************************/ function named_major($name) { srand(crc32($name)); return crypt_major(); } function named_minor($name) { srand(crc32($name)); return crypt_minor(); } /*************************************** Check validity for major and minor keys. ***************************************/ function _check_major($key) { if (is_string($key) && strlen($key) == 256) { foreach (range("\x00", "\xFF") as $char) { if (substr_count($key, $char) == 0) { return FALSE; } } return TRUE; } return FALSE; } function _check_minor($key) { if (is_string($key) && strlen($key) == 64) { $indexs = array(); foreach (array_map("ord", str_split($key)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($indexs, ($byte >> $shift) & 3); } } $dict = array_count_values($indexs); foreach (range(0, 3) as $index) { if ($dict[$index] != 64) { return FALSE; } } return TRUE; } return FALSE; } /*************************************** Create encode maps for encode functions. ***************************************/ function _encode_map_1($major) { return array_map("ord", str_split($major)); } function _encode_map_2($minor) { $map_2 = array(array(), array(), array(), array()); $list = array(); foreach (array_map("ord", str_split($minor)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($list, ($byte >> $shift) & 3); } } for ($byte = 0; $byte < 256; $byte++) { array_push($map_2[$list[$byte]], chr($byte)); } return $map_2; } /*************************************** Create decode maps for decode functions. ***************************************/ function _decode_map_1($minor) { $map_1 = array(); foreach (array_map("ord", str_split($minor)) as $byte) { foreach (range(6, 0, 2) as $shift) { array_push($map_1, ($byte >> $shift) & 3); } } return $map_1; }function _decode_map_2($major) { $map_2 = array(); $temp = array_map("ord", str_split($major)); for ($byte = 0; $byte < 256; $byte++) { $map_2[$temp[$byte]] = chr($byte); } return $map_2; } /*************************************** Encrypt or decrypt the string with maps. 
***************************************/ function _encode($string, $map_1, $map_2) { $cache = ""; foreach (str_split($string) as $char) { $byte = $map_1[ord($char)]; foreach (range(6, 0, 2) as $shift) { $cache .= $map_2[($byte >> $shift) & 3][mt_rand(0, 63)]; } } return $cache; } function _decode($string, $map_1, $map_2) { $cache = ""; $temp = str_split($string); for ($iter = 0; $iter < strlen($string) / 4; $iter++) { $b12 = $map_1[ord($temp[$iter * 4])] << 6; $b34 = $map_1[ord($temp[$iter * 4 + 1])] << 4; $b56 = $map_1[ord($temp[$iter * 4 + 2])] << 2; $b78 = $map_1[ord($temp[$iter * 4 + 3])]; $cache .= $map_2[$b12 + $b34 + $b56 + $b78]; } return $cache; } /*************************************** This is the public interface for coding. ***************************************/ function encode_string($string, $major, $minor) { if (is_string($string)) { if (_check_major($major) && _check_minor($minor)) { $map_1 = _encode_map_1($major); $map_2 = _encode_map_2($minor); return _encode($string, $map_1, $map_2); } } return FALSE; } function decode_string($string, $major, $minor) { if (is_string($string) && strlen($string) % 4 == 0) { if (_check_major($major) && _check_minor($minor)) { $map_1 = _decode_map_1($minor); $map_2 = _decode_map_2($major); return _decode($string, $map_1, $map_2); } } return FALSE; } ?> This is a sample showing how the code is being used. Hex editors may be of help with the input / output. Example <?php # get and process all of the form data @ $input = htmlspecialchars($_POST["input"]); @ $majorname = htmlspecialchars($_POST["majorname"]); @ $minorname = htmlspecialchars($_POST["minorname"]); @ $majorkey = htmlspecialchars($_POST["majorkey"]); @ $minorkey = htmlspecialchars($_POST["minorkey"]); @ $output = htmlspecialchars($_POST["output"]); # process the submissions by operation # CREATE @ $operation = $_POST["operation"]; if ($operation == "Create") { if (strlen($_POST["majorname"]) == 0) { $majorkey = bin2hex(crypt_major()); } if (strlen($_POST["minorname"]) == 0) { $minorkey = bin2hex(crypt_minor()); } if (strlen($_POST["majorname"]) != 0) { $majorkey = bin2hex(named_major($_POST["majorname"])); } if (strlen($_POST["minorname"]) != 0) { $minorkey = bin2hex(named_minor($_POST["minorname"])); } } # ENCRYPT or DECRYPT function is_hex($char) { if ($char == "0"): return TRUE; elseif ($char == "1"): return TRUE; elseif ($char == "2"): return TRUE; elseif ($char == "3"): return TRUE; elseif ($char == "4"): return TRUE; elseif ($char == "5"): return TRUE; elseif ($char == "6"): return TRUE; elseif ($char == "7"): return TRUE; elseif ($char == "8"): return TRUE; elseif ($char == "9"): return TRUE; elseif ($char == "a"): return TRUE; elseif ($char == "b"): return TRUE; elseif ($char == "c"): return TRUE; elseif ($char == "d"): return TRUE; elseif ($char == "e"): return TRUE; elseif ($char == "f"): return TRUE; else: return FALSE; endif; } function hex2bin($str) { if (strlen($str) % 2 == 0): $string = strtolower($str); else: $string = strtolower("0" . $str); endif; $cache = ""; $temp = str_split($str); for ($index = 0; $index < count($temp) / 2; $index++) { $h1 = $temp[$index * 2]; if (is_hex($h1)) { $h2 = $temp[$index * 2 + 1]; if (is_hex($h2)) { $cache .= chr(hexdec($h1 . 
$h2)); } else { return FALSE; } } else { return FALSE; } } return $cache; } if ($operation == "Encrypt" || $operation == "Decrypt") { # CHECK FOR ANY ERROR $errors = array(); if (strlen($_POST["input"]) == 0) { $output = ""; } $binmajor = hex2bin($_POST["majorkey"]); if (strlen($_POST["majorkey"]) == 0) { array_push($errors, "There must be a major key."); } elseif ($binmajor == FALSE) { array_push($errors, "The major key must be in hex."); } elseif (_check_major($binmajor) == FALSE) { array_push($errors, "The major key is corrupt."); } $binminor = hex2bin($_POST["minorkey"]); if (strlen($_POST["minorkey"]) == 0) { array_push($errors, "There must be a minor key."); } elseif ($binminor == FALSE) { array_push($errors, "The minor key must be in hex."); } elseif (_check_minor($binminor) == FALSE) { array_push($errors, "The minor key is corrupt."); } if ($_POST["operation"] == "Decrypt") { $bininput = hex2bin(str_replace("\r", "", str_replace("\n", "", $_POST["input"]))); if ($bininput == FALSE) { if (strlen($_POST["input"]) != 0) { array_push($errors, "The input data must be in hex."); } } elseif (strlen($bininput) % 4 != 0) { array_push($errors, "The input data is corrupt."); } } if (count($errors) != 0) { # ERRORS ARE FOUND $output = "ERROR:"; foreach ($errors as $error) { $output .= "\n" . $error; } } elseif (strlen($_POST["input"]) != 0) { # CONTINUE WORKING if ($_POST["operation"] == "Encrypt") { # ENCRYPT $output = substr(chunk_split(bin2hex(encode_string($_POST["input"], $binmajor, $binminor)), 58), 0, -2); } else { # DECRYPT $output = htmlspecialchars(decode_string($bininput, $binmajor, $binminor)); } } } # echo the form with the values filled echo "<P><TEXTAREA class=maintextarea name=input rows=25 cols=25>" . $input . "</TEXTAREA></P>\n"; echo "<P>Major Name:</P>\n"; echo "<P><INPUT id=textbox1 name=majorname value=\"" . $majorname . "\"></P>\n"; echo "<P>Minor Name:</P>\n"; echo "<P><INPUT id=textbox1 name=minorname value=\"" . $minorname . "\"></P>\n"; echo "<DIV style=\"TEXT-ALIGN: center\"><INPUT class=submit type=submit value=Create name=operation>\n"; echo "</DIV>\n"; echo "<P>Major Key:</P>\n"; echo "<P><INPUT id=textbox1 name=majorkey value=\"" . $majorkey . "\"></P>\n"; echo "<P>Minor Key:</P>\n"; echo "<P><INPUT id=textbox1 name=minorkey value=\"" . $minorkey . "\"></P>\n"; echo "<DIV style=\"TEXT-ALIGN: center\"><INPUT class=submit type=submit value=Encrypt name=operation> \n"; echo "<INPUT class=submit type=submit value=Decrypt name=operation> </DIV>\n"; echo "<P>Result:</P>\n"; echo "<P><TEXTAREA class=maintextarea name=output rows=25 readOnly cols=25>" . $output . "</TEXTAREA></P></DIV></FORM>\n"; ?>

    Read the article

  • &lt;%: %&gt;, HtmlEncode, IHtmlString and MvcHtmlString

    - by Shaun
    One of my colleagues and friends, Robin, has been playing and struggling with ASP.NET MVC 2 on a project these days while I’m struggling with an annoying client. Since it’s his first time using ASP.NET MVC he has been running into a lot of problems and I was very happy to share my experience with him. Yesterday he asked me about an issue: when he attempted to insert a <br /> element into his page he found that the page was rendered like this, which is bad. He found his <br /> was shown as a part of the string rather than creating a new line. After checking a bit in his code I found that it’s because he utilized a new ASP.NET markup supported in .NET 4.0 – “<%: %>”. If you have been using ASP.NET MVC 1 or the .NET 3.5 world it would be very common to use <%= %> to show something on the page from the backend code. But when you do that you must ensure that the string that is going to be displayed is Html-safe, which means all the Html markup must be encoded. Otherwise this might cause an XSS (cross-site scripting) problem. So you’d better use code like this below to display anything on the page. In .NET 4.0 Microsoft introduced a new markup to solve this problem, which is <%: %>. It will encode the content automatically so that you will not need to check and verify your code manually for the XSS issue mentioned above. But this also means that it will encode everything, including the Html elements you want to be rendered. So I changed his code like this and it worked well. After helping him solve this problem and finishing a spreadsheet for my boring project I considered the <%: %> markup a bit more. Since it encodes everything, why does it render correctly when we use “<%: Html.TextBox(“name”) %>” to show a text box? As you know, Html.TextBox will render an “<input name="name" id="name" type="text"/>” element on the page. If <%: %> encoded everything it should not display a text box. So I dug into the source code of MVC and found some comments in the class MvcHtmlString. // In ASP.NET 4, a new syntax <%: %> is being introduced in WebForms pages, where <%: expression %> is equivalent to // <%= HttpUtility.HtmlEncode(expression) %>. The intent of this is to reduce common causes of XSS vulnerabilities // in WebForms pages (WebForms views in the case of MVC). This involves the addition of an interface // System.Web.IHtmlString and a static method overload System.Web.HttpUtility::HtmlEncode(object). The interface // definition is roughly: // public interface IHtmlString { // string ToHtmlString(); // } // And the HtmlEncode(object) logic is roughly: // - If the input argument is an IHtmlString, return argument.ToHtmlString(), // - Otherwise, return HtmlEncode(Convert.ToString(argument)). // // Unfortunately this has the effect that calling <%: Html.SomeHelper() %> in an MVC application running on .NET 4 // will end up encoding output that is already HTML-safe. As a result, we're changing our HTML helpers to return // MvcHtmlString where appropriate. <%= Html.SomeHelper() %> will continue to work in both .NET 3.5 and .NET 4, but // changing the return types to MvcHtmlString has the added benefit that <%: Html.SomeHelper() %> will also work // properly in .NET 4 rather than resulting in a double-encoded output. MVC developers in .NET 4 will then be able // to use the <%: %> syntax almost everywhere instead of having to remember where to use <%= %> and where to use // <%: %>. This should help developers craft more secure web applications by default. // // To create an MvcHtmlString, use the static Create() method instead of calling the protected constructor. The comment says the encoding rule of <%: %> is: If the type of the content is IHtmlString it will NOT be encoded, since IHtmlString indicates that it’s Html-safe. Otherwise it will use HtmlEncode to encode the content. If we check the return type of the Html.TextBox method we will find that it’s MvcHtmlString, which implements the IHtmlString interface dynamically. That is the reason why the “<input name="name" id="name" type="text"/>” was not encoded by <%: %>. So if we want to tell ASP.NET MVC, or I should say the ASP.NET runtime, that the content is Html-safe and should not be encoded, we can convert the content into IHtmlString. So another resolution would be like this. We can also create an extension method for a better developing experience. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc;  namespace ShaunXu.Blogs.IHtmlStringIssue { public static class Helpers { public static MvcHtmlString IsHtmlSafe(this string content) { return MvcHtmlString.Create(content); } } } Then the view would be like this. And the page rendered correctly.         Summary In this post I explained a bit about the new markup in .NET 4.0 – <%: %> and its usage. I also explained a bit about how to control the page content, whether it should be encoded or not. We can see that ASP.NET MVC gives us more points to control the web pages.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
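To see the rule from that comment in isolation, here is a small C# sketch (my own illustration, not the actual System.Web implementation) of how HtmlEncode(object) decides whether to encode; it shows why an MvcHtmlString flows through <%: %> untouched while a plain string such as "<br />" gets encoded.

using System;
using System.Web;
using System.Web.Mvc;

public static class ColonSyntaxDemo
{
    // Sketch of the rule quoted above: IHtmlString values are trusted as Html-safe,
    // everything else is Html-encoded.
    public static string EncodeLikeColonSyntax(object value)
    {
        var htmlSafe = value as IHtmlString;
        return htmlSafe != null
            ? htmlSafe.ToHtmlString()                           // e.g. MvcHtmlString from Html.TextBox()
            : HttpUtility.HtmlEncode(Convert.ToString(value));  // e.g. a raw "<br />" string
    }

    public static void Main()
    {
        Console.WriteLine(EncodeLikeColonSyntax("<br />"));                       // &lt;br /&gt;
        Console.WriteLine(EncodeLikeColonSyntax(MvcHtmlString.Create("<br />"))); // <br />
    }
}

Which is exactly why converting a string with MvcHtmlString.Create (or the IsHtmlSafe extension method above) is enough to stop <%: %> from encoding it.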

    Read the article

< Previous Page | 358 359 360 361 362 363 364 365 366 367 368 369  | Next Page >