Search Results

Search found 76977 results on 3080 pages for 'create function'.

  • Why won't VirtualBox give me the option to create 64-bit guests?

    - by Eduardo Born
    My host is 64-bit Windows 8.1. I downloaded the latest VirtualBox (4.3) and I'm trying to create a VM with a 64-bit Ubuntu OS (ubuntu-12.04.3-desktop-amd64). When I go to the New VM wizard, it doesn't give me the option to select "Ubuntu (x64)" as I have seen in other people's screenshots, only "Ubuntu". As a result, the ISO can't boot. I tried on another PC and VirtualBox offers the x64 variants for most listed OSes... Control Panel shows an x64 OS and x64 processor. My host laptop is a Sony Vaio VPCZ22UGX/N with an Intel® Core™ i7-2640M processor. CPU-Z shows VT-x is available on my processor, of course. Here is what I have tried so far: I enabled IO APIC as required in the docs. I have virtualization enabled in the BIOS. It works fine in VMware. I checked that Hyper-V is not running or even installed on my Windows; same for VMware. I also tried running the command VBoxManage modifyvm [vmname] --longmode on for that VM, but no change. I think the issue is really that I can't select the x64 variant of the Ubuntu OS for that VM. Other people seem to indicate that's a requirement, but I don't get that option for some reason. I have spent a lot of time and can't find what's wrong... Does anyone know what could be missing here? Thank you very much!! Eduardo
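
    A couple of VBoxManage commands that may help when diagnosing this; they are a sketch only, and the VM name "Ubuntu12" is a placeholder:

        # List the guest OS types this VirtualBox build offers; 64-bit entries
        # only appear when VirtualBox can actually use VT-x/AMD-V.
        VBoxManage list ostypes | grep -i ubuntu

        # Force the 64-bit OS type and long mode on an existing VM.
        VBoxManage modifyvm "Ubuntu12" --ostype Ubuntu_64
        VBoxManage modifyvm "Ubuntu12" --longmode on

    If the ostypes list shows no 64-bit entries at all, VirtualBox itself cannot see VT-x, which usually points back at the BIOS setting or at another hypervisor holding the feature.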

    Read the article

  • How do I create a Wi-Fi network bridge with QEMU on OS X?

    - by a paid nerd
    I grabbed a small FreeBSD live CD and QEMU, and I'm trying to bridge my Mac OS X 10.8 wifi connection so that the guest OS is available on my LAN. However, the guest OS never gets a DHCP lease. This works perfectly with VirtualBox in their "bridged" network mode, so I know it can be done. I need to get it working with QEMU because VirtualBox doesn't support the architecture that I need for this project. Here's what I've done so far based on hours of googling: Installed TUNTAP for OS X Told OS X to supposedly forward all packets, even ARP: (NOTE: This doesn't appear to work.) $ sudo sysctl -w net.inet.ip.forwarding=1 $ sudo sysctl -w net.link.ether.inet.proxyall=1 $ sudo sysctl -w net.inet.ip.fw.enable=1 Created a bridge: $ sudo ifconfig bridge0 create $ sudo ifconfig bridge0 addm en0 addm tap0 $ sudo ifconfig bridge0 up $ ifconfig bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether ac:de:xx:xx:xx:xx Configuration: priority 0 hellotime 0 fwddelay 0 maxage 0 ipfilter disabled flags 0x2 member: en0 flags=3<LEARNING,DISCOVER> port 4 priority 0 path cost 0 member: tap0 flags=3<LEARNING,DISCOVER> port 8 priority 0 path cost 0 tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether ca:3d:xx:xx:xx:xx open (pid 88244) Started tcpdump with -I in the hopes that it enables promiscuous mode on the wifi device: $ sudo tcpdump -In -i en0 Run QEMU using the bridged network instructions: $ qemu-system-x86_64 -cdrom mfsbsd-9.2-RELEASE-amd64.iso -m 1024 \ -boot d -net nic -net tap,ifname=tap0,script=no,downscript=no But the guest system never gets a DHCP lease: If I tcpdump -ni tap0, I see lots of traffic from the wireless network. But if I tcpdump -ni en0, I don't see any DHCP traffic from the QEMU guest OS. Any ideas? Update 1: I tried sudo defaults write "/Library/Preferences/SystemConfiguration/com.apple.Boot" "Kernel Flags" "net.inet.ip.scopedroute=0" and rebooting per this mailing list suggestion, but this didn't help. In fact, it made VirtualBox bridged mode stop working.

    Read the article

  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our eCommerce website www.tervis.com currently runs on two servers: SQL server: SQL Server 2005 x86 on Windows Server 2003 Standard x86 with a single dual-core processor and 4 GB of memory. IIS server: Windows Server 2008 Web Edition x64 with dual quad-core hyper-threaded processors and 32 GB of memory. Tervis.com's revenue has steadily grown to the point where we need redundant servers deployed with a failover mechanism so that we do not have any downtime. Because the SQL server is so underpowered compared to the web server, my thought was to purchase: 2 x SQL Server 2008 R2 Web Edition x64 single-processor licenses, 2 x Windows Server 2008 R2 Web Edition licenses, 1 x new physical dual quad-core 32 GB server, 1 x F5 load balancer. I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box for both of these servers. The thought is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would serve as the device that monitors the two servers and, if the current active one stops responding, fails over to the other server. To be clear, this is not Windows clustering, but simply using a load balancer to fail over between two computers so that you have a cluster in the general sense. Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL server to function as the device that funnels HTTP requests to the appropriate active server and then fails over if a problem occurs? Is there any third-party clustering software that might help me accomplish this in a simpler fashion?

    Read the article

  • How to clone MySQL DB? Errors with CREATE VIEW/SHOW VIEW privileges

    - by user38071
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried doing a dump to a .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with PHPMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root" though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). The SHOW GRANTS command for the existing user works (the one who has privileges over the source database). It shows that the user has all privileges granted. I created a new user and database. Here's what I see with the grant commands. $ mysql -A -umyrootaccount --password=myrootaccountpassword mysql> grant all privileges on `newtarget_db`.* to 'newtestuser'@'localhost'; ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db' mysql> grant all privileges on `newtarget_db`.* to 'existingsourcedbuser'@'localhost'; ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db'
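
    A minimal cloning sequence, sketched with placeholder database/user names and assuming an admin account that genuinely holds GRANT OPTION (the 1044 errors above suggest "myrootaccount" does not, which SHOW GRANTS FOR 'myrootaccount'@'localhost' would confirm):

        # Dump the source database (structure and data; views need SHOW VIEW on the dumping user)
        mysqldump -uadminaccount -p --single-transaction wordpress_src > wordpress.sql

        # Create the target database and grant the new user rights on it
        mysql -uadminaccount -p -e "CREATE DATABASE wordpress_copy;
          GRANT ALL PRIVILEGES ON wordpress_copy.* TO 'newtestuser'@'localhost' IDENTIFIED BY 'secret';
          FLUSH PRIVILEGES;"

        # Import into the clone
        mysql -unewtestuser -psecret wordpress_copy < wordpress.sql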

    Read the article

  • How to create a hash or yml from the top-level attribute values of a node?

    - by Sarah Haskins
    I have a chef recipe where I want to take all of the attributes under node['cfn']['environment'] and write them to a yml file. I could do something like this (it works fine): content = { "environment_class" => node['cfn']['environment']['environment_class'], "node_id" => node['cfn']['environment']['node_id'], "reporting_prefix" => node['cfn']['environment']['reporting_prefix'], "cfn_signal_url" => node['cfn']['environment']['signal_url'] } yml_string = YAML::dump(content) file "/etc/configuration/environment/platform.yml" do mode 0644 action :create content "#{yml_string}" end But I don't like that I have to explicitly list out the names of the attributes. If I later add a new attribute, it would be nice if it were automatically included in the written-out yml file. So I tried something like this: yml_string = node['cfn']['environment'].to_yaml But because the node is actually a Mash, I get a platform.yml file like this (it contains a lot of unexpected nesting that I don't want): --- !ruby/object:Chef::Node::Attribute normal: tags: [] cfn: environment: &25793640 reporting_prefix: Platform2 signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/... environment_class: Dev node_id: i-908adf9 ... But what I want is this: ---- reporting_prefix: Platform2 signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/... environment_class: Dev node_id: i-908adf9 How can I achieve the desired yml output without explicitly listing the attributes by name?
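
    One possible approach, sketched and untested: the attribute subtree is a Chef Mash, and calling to_hash on it yields a plain Ruby Hash, so YAML::dump no longer serializes the Chef::Node::Attribute wrapper:

        # Convert the Mash subtree to a plain Hash before dumping it
        content = node['cfn']['environment'].to_hash

        file "/etc/configuration/environment/platform.yml" do
          mode 0644
          action :create
          content YAML::dump(content)
        end

    Any attribute later added under node['cfn']['environment'] would then appear in the file automatically.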

    Read the article

  • How to create an init.d script for openssh-server which was compiled and installed from source using configure + make + make install?

    - by Patrick L
    I have installed openssh-server on my Ubuntu PC using apt-get install openssh-server. The version is 5.9. Now, I would like to compile and install openssh-server version 6.2 from source. I have successfully downloaded the source code and run the following commands: ./configure make make install I found that the new version of openssh-server was installed into /usr/local/sbin/. The old version of openssh-server is in /usr/sbin/. I found that the service script in /etc/init.d/ssh is still pointing to /usr/sbin/. And the old openssh-server (v5.9) is still running. How can I replace the old openssh-server with the new openssh-server that I have just compiled and installed? How can I create an init.d script to start and stop the new openssh-server that I've compiled from source? How do I start the new openssh-server on boot? When I install openssh-server using apt-get install, the config files are installed into /etc/ssh/. If I compile and install it from source, where is the config file? If I compile openssh-server from source, but install the openssh-client package using apt-get install, will there be any config file conflicts? Thanks.
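
    A minimal init.d sketch for the source-built daemon. The paths assume the default ./configure prefix (binary in /usr/local/sbin, config in /usr/local/etc/sshd_config, which also answers the config-file question for a default source build); the script and PID-file names are placeholders:

        #!/bin/sh
        # /etc/init.d/sshd-local: start/stop the source-built OpenSSH daemon
        SSHD=/usr/local/sbin/sshd
        CONFIG=/usr/local/etc/sshd_config
        PIDFILE=/var/run/sshd-local.pid

        case "$1" in
          start)   $SSHD -f $CONFIG -o PidFile=$PIDFILE ;;
          stop)    [ -f $PIDFILE ] && kill "$(cat $PIDFILE)" ;;
          restart) "$0" stop; sleep 1; "$0" start ;;
          *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
        esac
        exit 0

    Registering it with sudo update-rc.d sshd-local defaults would start it at boot; stop and disable the packaged 5.9 service first so both daemons don't fight over port 22. A source-built server does not conflict with the packaged openssh-client, whose config lives in /etc/ssh/ssh_config.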

    Read the article

  • How to create a new background process in a KSH "while read" loop?

    - by yael
    The following test script has a problem. When I add the line ( sleep 5 ) & to the script, the "while read" loop does not read all lines from the file, but only prints the first line. But when I remove the ( sleep 5 ) & from the script, the script prints all lines as defined in the file. Why does the ( sleep 5 ) & cause this? And how can I solve the problem? I want to create a new process (for which the sleep is just an example) in the while loop: $ more test #!/bin/ksh while read -r line ; do echo Read a line: echo $line ( sleep 5 )& RESULT=$! echo Started background sleep with process id $RESULT sleep 1 echo Slept for a second kill $RESULT echo Killed background sleep with process id $RESULT done < file echo Completed On my Linux, when using the following contents of file: $ more file 123 aaa 234 bbb 556 ccc ...running ./test just gives me: Read a line: 123 aaa Started background sleep with process id 4181 Slept for a second Killed background sleep with process id 4181 Completed
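
    A common explanation for this behavior is that the background subshell inherits the loop's redirected stdin (the file) and disturbs the shared file offset; a frequently suggested workaround, shown here as an untested sketch, is to detach the child's stdin explicitly:

        while read -r line ; do
          echo Read a line:
          echo "$line"
          ( sleep 5 ) < /dev/null &   # give the child its own stdin, away from 'file'
          RESULT=$!
          sleep 1
          kill $RESULT
        done < file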

    Read the article

  • How can I create a VLAN on my Extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine on this domain, but I would have to plug them into my current LIVE domain to get internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though it is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access of course; one of the things I can't wrap my head around is routing internet traffic between subnets (the gateway is on the production subnet). Switch is a Summit x450e-48p My production domain is on subnet 192.168.200.0. My new domain I want to put online would go into subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!
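
    On ExtremeXOS, which the Summit x450e-48p runs, the second VLAN would look roughly like this; the VLAN name, tag, and port range are placeholders:

        create vlan lab
        configure vlan lab tag 10
        configure vlan lab ipaddress 192.168.10.1 255.255.255.0
        configure vlan lab add ports 45-48 untagged
        enable ipforwarding vlan lab

    enable ipforwarding lets the switch route between the two subnets; the production gateway still needs a static route sending 192.168.10.0/24 back to the switch's 192.168.200.x address (and NAT for it) before the new subnet can reach the internet.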

    Read the article

  • How to create a static IP on Windows Server 2008 R2 so I can access the server remotely

    - by Aesir
    I have just purchased an HP ProLiant N40L which I intend to use as a NAS, a learning tool, and just in general something to mess around with. As a student, via the Microsoft DreamSpark program I can get a free copy of Windows Server 2008 R2, which I am using as the OS. So that I can remote to the box from outside of my local network, and so that I can stream media from it to my PS3, I have read that I need to create a static IP for the server and use port forwarding to forward to this IP so I can remote in. Is this correct? I am not really sure how to do this, or whether I need to make these changes in my router configuration, in the OS, or both. I am a novice when it comes to networking; however, most resources for Windows Server 2008 R2 seem to assume a fair amount of experience already. I realise that using this particular OS may seem like overkill for what I currently wish to do with it (stream content to other devices and back up), but as I can get a copy for free it seems sensible. Edit: From reading the answers posted, I feel I should give more information. I have now tried to add a static IP address using my router configuration settings. I used the getmac command to get the MAC address of the server. My ISP is Virgin Media, and I have gone to the LAN IP section and added an IP address to the DHCP Reservation Lease Info. I can now use Remote Desktop Connection internally to remote to the server (so I am assuming assigning this IP has worked). How do I configure this on the OS as well? I am also unsure how I would remote to this machine from outside my local network.
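
    As an alternative to the DHCP reservation, the address can be fixed on the OS itself with netsh from an elevated prompt; the interface name and addresses below are illustrative placeholders, not values read from this machine:

        netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.50 255.255.255.0 192.168.0.1
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.0.1 primary

    For access from outside the LAN, forward TCP port 3389 (Remote Desktop) on the router to that address and connect to your public IP; a dynamic DNS name helps if the public IP changes.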

    Read the article

  • Logfile deleted on an Oracle database: how do I re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption' and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started up my database using startup pfile=$PFILE nomount and mounted it using the command alter database mount; now when I try to open it with alter database open; it gives me the error: ORA-03113: end-of-file on communication channel Process ID: 22125 Session ID: 25 Serial number: 1 I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I have deleted. How can I go about recovering this now so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get: 1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo 2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo 3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo 4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with these commands: ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M; Nothing; it supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
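
    Provided group 2 was inactive and already archived when the member was deleted, a commonly used recovery path (worth trying before restoring from backup) is to have Oracle rebuild the group while the database is mounted:

        STARTUP MOUNT;
        ALTER DATABASE CLEAR LOGFILE GROUP 2;            -- recreates the missing member(s)
        -- or, if the group had not yet been archived:
        -- ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE OPEN;

    If the deleted log belonged to the CURRENT group, clearing is not enough, and incomplete recovery (RECOVER DATABASE UNTIL CANCEL, then ALTER DATABASE OPEN RESETLOGS) is the usual route.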

    Read the article

  • Best way to create and restore a Drive Image with Windows 7? [closed]

    - by jasondavis
    Possible Duplicate: Want to create a system image I am about to build a new PC. I am a Windows 7 user. For years now I have been wanting to install Windows and all my favorite software, music, etc., then make a drive IMAGE, and be able to go in 6 months later or WHENEVER I want to start fresh, completely format my drives, restore my IMAGE, and have all my settings, programs, etc. be just like when I created the original image. I know there are many ways to do this, but I have never done this 100% successfully, and I have about a week to figure out how to do it perfectly for when I build my new PC. I have heard good things about Acronis True Image in the past for doing what I describe; I tried using it, but the newer versions are overly complex and don't even seem to work the way I hoped. I also see that Windows 7 has some sort of drive IMAGE creator itself as well. Does the newer Windows 7 image creator do what I am describing above? Does it do what I am asking for (a complete drive image with Windows, all programs and settings) saved to an IMAGE file that can easily be restored to ANY hard drive in the future? Please share your experiences, tips, and ideas on how to achieve this in the easiest and most reliable way.
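
    Windows 7's built-in imaging (Control Panel, Backup and Restore, "Create a system image") does capture Windows plus all installed programs and settings, and the image can be restored from a repair disc or the install DVD via "System Image Recovery". The same engine is scriptable; a sketch, with the target drive letter as a placeholder:

        rem Image the system drive to an external disk (run from an elevated prompt)
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    One caveat: the built-in restore expects a target disk at least as large as the original, so it is less flexible than third-party imaging tools for moving to smaller drives.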

    Read the article

  • How can I create a separate toolbar from the Task Bar?

    - by Iszi
    In Windows XP, you could separate toolbars from the Task Bar by dragging them to the desktop. They could then be left lying about anywhere on your screen or, my preferred option, docked to any side of the screen. I found this particularly useful to keep a handy list of common phone numbers quickly accessible. I'd create a new toolbar pointing to a custom folder, and put a bunch of dead shortcuts in the folder that had names and numbers as their file names. I'd then dock the toolbar to the left side, set it to auto-hide and always on top (options which could be set separate from the Task Bar as well) and it would be readily available no matter what else I was doing on my system. However, on my Windows 7 system, I seem unable to perform the crucial step of pulling the new toolbar off of the Task Bar. This is of course with the Task Bar "unlocked" so that I can move all my toolbars around. Is there something I'm missing here, or is this a feature that's been disabled in Windows 7? Is there any way to re-enable it, or otherwise achieve similar functionality? I'd rather be able to do this without additional software, if possible.

    Read the article

  • Is it possible to create a live Linux ISO containing a Win XP virtual machine?

    - by mark
    I would like to have a Linux live system that contains a Windows XP virtual machine. This would be run from a bootable USB flash drive. My attempts so far have been unsuccessful. I created a Lubuntu 12.04 virtual machine with VMware. I updated and configured it to my needs, and installed VirtualBox. I then created a Windows XP VM with VirtualBox in the Lubuntu VM. I tested everything and everything worked, including USB devices. I installed Remastersys in the Lubuntu VM, copied the XP VM folder to the /etc/skel folder, then created the custom ISO with Remastersys. I burned the ISO and tested it on a laptop. It worked flawlessly. All programs and wireless networking worked. My problem was the XP VM. VirtualBox started fine but would not run the VM. I got the following error: Result Code: NS_ERROR_FAILURE (0x80004005) Component: VirtualBox Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}. I ran Remastersys again, changing the permissions on the skel folder to RW for everyone. I also logged into Lubuntu as root and ran Remastersys again. Each ISO I created worked fine but would not start the XP VM inside. On the last attempt VirtualBox gave me an access error stating it cannot access the virtual disk. Is what I want to do possible? In theory I don't see why it would not work. Is it a permissions issue? Should I create the ISO and then add the XP VM afterwards by editing the ISO by hand? Is using a VM and not real hardware as a build machine a problem? Any ideas? Keep any responses in layman's terms; I am still a Linux novice.

    Read the article

  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same directory level, with the same name as the work directory, but with a suffix (_a). The new directory will be moved to another collection later on. Something like this, #!/bin/bash mkdir ../n_a for file in *{.JPG|.jpg}; do mogrify -path ../n_a -resize 1200x1200 -quality 96;done I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here. Edit: "n" needs to be replaced with the syntax for the working directory name. Sorry there was a typo as well third script line, should have read n not x Edit2: This script does exactly what I need and it's silent. #!/bin/bash DEST="../${PWD##*/}_a" mkdir -p $DEST mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG thanks to vgoff for the correct PWD syntax and cesareriva http://www.cesareriva.com/archives/722 for showing me the DEST function. Something else: ${PWD##*/}_a is not caring for spaces in the directory name and the script fails. An empty dir is created in the same dir as the images. Found it out now, it needs quotations on the $DEST too, presumably to help mkdir create the dir with a space in the name, and mogrify to write the files to the right place, like this #!/bin/bash DEST="../${PWD##*/}_a" mkdir -p "$DEST" mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG

    Read the article

  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly-customized Active Directory environment (2003 FL) that's got me wondering if there's any particularly easy way to figure out what a custom attribute's function is, and what, if anything, is "using" that particular attribute. And then what some good options for potentially removing custom attributes from the schema might be. Aside from a restore or starting from scratch. If such an option exists. For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course. And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema - some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2k, but it seems like Microsoft disabled this option in SP4, and the domain in question is a 2003 functional level.
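
    A sketch for investigating such an attribute with the AD PowerShell module (this being a 2003 forest, it needs a 2008 R2 DC or the Active Directory Management Gateway Service to answer; the attribute name is the question's own example):

        # Locate the attribute's definition in the schema
        $schemaNC = (Get-ADRootDSE).schemaNamingContext
        Get-ADObject -SearchBase $schemaNC -LDAPFilter '(lDAPDisplayName=isDumbass)' `
            -Properties attributeID, isDefunct, whenCreated

        # Find every object that actually carries a value for it
        Get-ADObject -LDAPFilter '(isDumbass=*)' -Properties isDumbass

    As for removal: Active Directory does not support deleting schema attributes at all; the supported option is marking the attribute defunct (setting isDefunct to TRUE), which blocks further use without a forest rebuild.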

    Read the article

  • Create new VMs from a template with a CSV. Possible?

    - by EdConde
    I am new to PowerShell and PowerCLI... but I manage a few ESX environments and really would like to do as much as possible via PowerShell. On with the help I need: I used this one-liner to create VMs from templates, but the problem is there has to be some user input after each new VM is created. New-VM name -Template template -VMHost VMHost -Datastore Datastore What I would like to do is import via CSV the name of the new VM, the template to use, the host to put the new VM on, and the datastore. I don't know if it is as easy as below, but I kept getting errors. Import-Csv "C:\powershell\Data\VM2Create.csv" | Foreach-object{ New-VM $_.name -Template $_.template -VMHost $_.VMHost -Datastore $_.Datastore} I know there are some () or {} or possibly | that are needed... just don't know where to put them... The CSV I think would look like this: name, template, vmhost, datastore Any help or thoughts would be much appreciated...
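
    A sketch of the corrected pipeline, assuming CSV headers of name,template,vmhost,datastore and a placeholder vCenter name (note the pipeline variable is $_, which is easy to lose to formatting):

        Connect-VIServer vcenter.example.com

        Import-Csv "C:\powershell\Data\VM2Create.csv" | ForEach-Object {
            New-VM -Name $_.name `
                   -Template (Get-Template -Name $_.template) `
                   -VMHost (Get-VMHost -Name $_.vmhost) `
                   -Datastore (Get-Datastore -Name $_.datastore)
        }

    Resolving the template, host, and datastore with Get-Template, Get-VMHost, and Get-Datastore hands New-VM real objects instead of bare strings, which avoids most parameter-binding errors.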

    Read the article

  • How can I extend / create a new partition from the following setup?

    - by Kiada
    I'm a little unsure what to do in this situation. When I try to create a new simple volume from the unallocated space I get an error because I already have 4 partitions. I have no option to extend either my C:\ primary partition or the E:\ logical drive. C:\ - Gaming Win7 install. D:\ - Storage Unallocated Space - Would somehow like to install OSX on a partition from this space. E:\ - Software Development Win7 install. I:\ - Ignore this. It's an external 1TB HDD. Do I have any options that do not involve formatting / losing information on either C:\ or E:\? Thank you. Link to visual disk partitioning setup image. Edit: A bit more information regarding partitions. Firstly, the image linked above is a screenshot of Windows 7 partitioning tool, easier to read than text I guess! H:\ System Reserved: 100MB NTFS C:\ 244 GB NTFS Healthy (Page File, Primary Partition) D:\ 294 GB NTFS Healthy (Primary Partition) E:\ 100 GB NTFS Healthy (Boot, Page File, Crash Dump, Logical Drive) Unallocated 292 GB Hope this helps :)

    Read the article

  • Converting a generic list into a JSON string and then handling it in JavaScript

    - by Jalpesh P. Vadgama
    We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance across browsers is very good. So let's create a simple example: we will build a generic list, convert that list into a JSON string, and then call the web service from JavaScript and handle the result in JavaScript. To do this we need an info class (type) for which we are going to create a generic list. Here is the code; I have created a simple class with two properties, UserId and UserName public class UserInfo { public int UserId { get; set; } public string UserName { get; set; } } Now let's create a web service whose web method will create a list and then convert it into a JSON string with the JavaScriptSerializer class. Here is the web service class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; namespace Experiment.WebService { /// <summary> /// Summary description for WsApplicationUser /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. [System.Web.Script.Services.ScriptService] public class WsApplicationUser : System.Web.Services.WebService { [WebMethod] public string GetUserList() { List<UserInfo> userList = new List<UserInfo>(); for (int i = 1; i <= 5; i++) { UserInfo userInfo = new UserInfo(); userInfo.UserId = i; userInfo.UserName = string.Format("{0}{1}", "J", i.ToString()); userList.Add(userInfo); } System.Web.Script.Serialization.JavaScriptSerializer jSearializer = new System.Web.Script.Serialization.JavaScriptSerializer(); return jSearializer.Serialize(userList); } } } Note: you must have the '[System.Web.Script.Services.ScriptService]' attribute on the web service class, as this attribute enables the web service to be called from the client side. Now that we have created a web service class, let's create a JavaScript function 'GetUserList' which will call the web service from JavaScript like the following: function GetUserList() { Experiment.WebService.WsApplicationUser.GetUserList(ReuqestCompleteCallback, RequestFailedCallback); } As you can see, we have passed two callback functions, ReuqestCompleteCallback and RequestFailedCallback, which handle the result and errors from the web service. ReuqestCompleteCallback will handle the result of the web service, and if an error occurs then RequestFailedCallback will print the error. Following is the code for both functions. function ReuqestCompleteCallback(result) { result = eval(result); var divResult = document.getElementById("divUserList"); CreateUserListTable(result); } function RequestFailedCallback(error) { var stackTrace = error.get_stackTrace(); var message = error.get_message(); var statusCode = error.get_statusCode(); var exceptionType = error.get_exceptionType(); var timedout = error.get_timedOut(); // Display the error. var divResult = document.getElementById("divUserList"); divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" + "Service Error: " + message + "<br/>" + "Status Code: " + statusCode + "<br/>" + "Exception Type: " + exceptionType + "<br/>" + "Timedout: " + timedout; } In the code above you can see that we have used the 'eval' function, which parses the string into an enumerable form.
    Then we call a function named 'CreateUserListTable', which builds a table string and places it in a div. Here is the code for that function. function CreateUserListTable(userList) { var tablestring = '<table ><tr><td>UserID</td><td>UserName</td></tr>'; for (var i = 0, len = userList.length; i < len; ++i) { tablestring=tablestring + "<tr>"; tablestring=tablestring + "<td>" + userList[i].UserId + "</td>"; tablestring=tablestring + "<td>" + userList[i].UserName + "</td>"; tablestring=tablestring + "</tr>"; } tablestring = tablestring + "</table>"; var divResult = document.getElementById("divUserList"); divResult.innerHTML = tablestring; } Now let's create the div which will hold all the HTML generated by this function. Here is the code of my web page; we also need to add a script reference to enable the web service from the client side. Here is all the HTML code we have. <form id="form1" runat="server"> <asp:ScriptManager ID="myScirptManger" runat="Server"> <Services> <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" /> </Services> </asp:ScriptManager> <div id="divUserList"> </div> </form> Now, as we have not defined where we are going to call the 'GetUserList' function, let's call it in the window onload event of JavaScript like the following. window.onload=GetUserList; That's it. Now let's run it in the browser to see whether it works or not, and here is the output in the browser, as expected. That's it. This was a very basic example, but you can create your own JavaScript-enabled grid from this, and you can see the possibilities are unlimited here. Stay tuned for more.. Happy programming.. Technorati Tags: JSON,Javascript,ASP.NET,WebService

    Read the article

  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
    Here's a quick example of using DTrace Dynamic Tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available" even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do. I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file: cd ~/Desktop wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip wget https://phar.phpunit.de/phpunit.phar unzip master.zip I created a DTrace D script functree.d: #pragma D option quiet self int indent; BEGIN { topfunc = $1; } php$target:::function-entry /copyinstr(arg0) == topfunc/ { self->follow = 1; } php$target:::function-entry /self->follow/ { self->indent += 2; printf("%*s %s%s%s\n", self->indent, "->", arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0)); } php$target:::function-return /self->follow/ { printf("%*s %s%s%s\n", self->indent, "<-", arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0)); self->indent -= 2; } php$target:::function-return /copyinstr(arg0) == topfunc/ { self->follow = 0; } This prints a PHP script function call tree starting from a given PHP function name. This name is passed as a parameter to DTrace, and assigned to the variable topfunc when the DTrace script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound. From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace: function testUnionContainsAllElements() { ... } I invoked DTrace to trace function calls invoked by this test with # dtrace -s ./functree.d -c 'php phpunit.phar \ /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \ '"testUnionContainsAllElements"' The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace. The parameter must have double quotes included so DTrace knows it is a string. The output is: PHPUnit 3.7.28 by Sebastian Bergmann.
......-> FunSetsTest::testUnionContainsAllElements -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::union <- FunSets::union -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertFalse -> PHPUnit_Framework_Assert::isFalse -> {closure} -> main <- main <- {closure} <- PHPUnit_Framework_Assert::isFalse -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertFalse <- FunSetsTest::testUnionContainsAllElements ... Time: 1.85 seconds, Memory: 3.75Mb OK (9 tests, 23 assertions) The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked. If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses checking the entry and exit of assertFalse() and assertTrue(). 
But if you want to see all of PHPUnit's code flow, you can modify the functree.d code that sets and unsets self->follow, and instead change it to toggle the variable in request-startup and request-shutdown probes: php$target:::request-startup { self->follow = 1 } php$target:::request-shutdown { self->follow = 0 } Be prepared for a large amount of output!

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, and modify the active or any inactive BE, and to do so while the production workload continues to run. Background In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set. The features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry. Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system, when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative? ). It's important to note that the property is inherited from the original BE's file system to any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE that has the name that is stored in the activebe property. Here is a quick summary of the actions you can use to manage these BEs: To create a BE: Create a ZFS clone of the zone's root dataset To activate a BE: Set the ZFS property of the root dataset to indicate the BE To add a package or patch to an inactive BE: Mount the inactive BE Add packages or patches to it Unmount the inactive BE To list the available BEs: Use the "zfs list" command. To destroy a BE: Use the "zfs destroy" command. Preparation Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer. Create a flash archive on the Solaris 10 system s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar Configure the Solaris 10 BZ on the Solaris 11 system s11# zonecfg -z s10z Use 'create' to begin configuring a new zone. 
zonecfg:s10z create -t SYSsolaris10 zonecfg:s10z set zonepath=/zones/s10z zonecfg:s10z exit s11# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - s10z configured /zones/s10z solaris10 excl Install the zone from the flash archive s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs. New features in action Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names. Create The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name. s10z# zfs snapshot rpool/ROOT/zbe-0@snap s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty filesystem successfully created, but not mounted You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory. List the available BEs and active BE Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones. s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.55G 42.9G 31K legacy rpool/ROOT/zbe-0 1K 42.9G 3.55G / rpool/ROOT/newBE 3.55G 42.9G 3.55G / The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local Change the active BE When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# shutdown -y -g0 -i6 After the zone has rebooted: s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# zfs mount rpool/ROOT/newBE / rpool/export /export rpool/export/home /export/home rpool /rpool Mount the original BE to see that it's still there. s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# ls /mnt Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt Patch an inactive BE At this point, you can modify the original BE. 
If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.) s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02 Note that the typical usage will be: Create a BE Mount the new (inactive) BE Use the package and patch tools to update the new BE Unmount the new BE Reboot Delete an inactive BE ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is that parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain: s10z# zfs promote rpool/ROOT/newBE s10z# zfs destroy rpool/ROOT/zbe-0 s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.56G 269G 31K legacy rpool/ROOT/newBE 3.56G 269G 3.55G / Documentation This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details. Conclusion With this new feature, you can add and patch packages to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.

    Read the article

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx In the past, I have shown how to create a list content type and add the content type to a list in code. As Developers, many of the artifacts which we create are widgets which have a List or Document Library as the back end. We need to be able to create our applications (Web Part, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows: 1. Create an Empty SharePoint Project in Visual Studio 2. Add a Code Folder in the solution and drag and drop the Utilities and Extensions Libraries into the solution 3. Create a new Feature and add an event receiver; all the code will be in the event receiver 4. Add the fields which will extend the built-in Document content type 5. If the Content Type does not exist, create it 6. If the Document Library does not exist, create it with the new Content Type inherited from the Document Content Type 7. Delete the Document Content Type from the Library (as we have a new one which inherits from it) 8. Add the fields which we want to be visible from the fields added to the new Content Type Here we go: Create an Empty SharePoint Project in Visual Studio. Add a Code Folder in the solution and drag and drop the Utilities and Extensions Libraries into the solution. The Utilities and Extensions Library will be part of this project; I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer you will need to show all files and then include them in your project. Change the Namespace to agree with your project. Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked to add an Event Receiver. All of our code will be in this Event Receiver. For this demo I will only be using the Feature Activated event. From this point on we will be looking at code! We are adding two constants: columnGroup (how we want SharePoint to group them, usually the company name) and ctName (the Content Type name). using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; namespace CreateDocLib.Features.CreateDocLib { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")] public class CreateDocLibEventReceiver : SPFeatureReceiver { const string columnGroup = "DJ"; const string ctName = "DJDocLib"; } } Here we are creating the Feature Activated event: adding the new fields (site columns), testing whether the Content Type exists and adding it if not, then testing whether the document library exists and adding it if not.
#region DocLib public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { //add the fields addFields(spWeb); //add content type SPContentType testCT = spWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(spWeb, ctName); } if ((spWeb.Lists.TryGetList("MyDocuments") == null)) { //create the list if it doesn't exist CreateDocLib(spWeb); } } } #endregion The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes. public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup); } The addContentType method adds the new Content Type to the site Content Types. We have already checked to see that it does not exist. In addition, here is where we add the linkages from our previously created site columns to our new Content Type private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } Here we are adding just one linkage, as we only have one additional field in our Content Type public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Icon", ct); } Next we add the logic to create our new Document Library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type. private void CreateDocLib(SPWeb web) { using (var site = new SPSite(web.Url)) { var web1 = site.RootWeb; var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary); var lib = web1.Lists[listId] as SPDocumentLibrary; lib.ContentTypesEnabled = true; var docType = web.ContentTypes[ctName]; lib.ContentTypes.Add(docType); lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id); lib.Update(); AddLibrarySettings(web1, lib); } } Finally, we set some document library settings on our new document library with the AddLibrarySettings method. We then ensure that the new site column is visible when viewed in the browser. private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib) { lib.OnQuickLaunch = true; lib.ForceCheckout = true; lib.EnableVersioning = true; lib.MajorVersionLimit = 5; lib.EnableMinorVersions = true; lib.MajorWithMinorVersionsLimit = 5; lib.Update(); var view = lib.DefaultView; view.ViewFields.Add("Icon"); view.Update(); } Okay, what's cool here: in a few lines of code, we have created site columns, a Content Type, and a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code

    Read the article

  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you're new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs). Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they've finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they'd like to finish what they were doing so that their data isn't lost. In this post I'll talk about a technique that can be used to accomplish this type of routing task. The $locationChangeStart Event When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) the $routeChangeStart doesn't provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you'll find the following code that shows how the $locationChangeStart event is raised as the $browser object's onUrlChange() function is invoked: $browser.onUrlChange(function (newUrl) { if ($location.absUrl() != newUrl) { if ($rootScope.$broadcast('$locationChangeStart', newUrl, $location.absUrl()).defaultPrevented) { $browser.url($location.absUrl()); return; } $rootScope.$evalAsync(function () { var oldUrl = $location.absUrl(); $location.$$parse(newUrl); afterLocationChange(oldUrl); }); if (!$rootScope.$$phase) $rootScope.$digest(); } }); The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I've added a call to $on() into a function that is called immediately after the controller is invoked: function init() { //initialize data here.. //Make sure they're warned if they made a change but didn't save it //Call to $on returns a "deregistration" function that can be called to //remove the listener (see routeChange() for an example of using it) onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange); } This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on is a "deregistration" function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it's accessible throughout the controller). You'll see how the onRouteChangeOff function is used in just a moment.
Cancelling Route Navigation

The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog to prompt the user. Here’s the code for routeChange():

function routeChange(event, newUrl) {
    //Navigate to newUrl if the form isn't dirty
    if (!$scope.editForm.$dirty) return;

    var modalOptions = {
        closeButtonText: 'Cancel',
        actionButtonText: 'Ignore Changes',
        headerText: 'Unsaved Changes',
        bodyText: 'You have unsaved changes. Leave the page?'
    };

    modalService.showModal({}, modalOptions).then(function (result) {
        if (result === 'ok') {
            onRouteChangeOff(); //Stop listening for location changes
            $location.path(newUrl); //Go to page they're interested in
        }
    });

    //prevent navigation by default since we'll handle it
    //once the user selects a dialog option
    event.preventDefault();
    return;
}

Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation since we need to prompt the user before leaving the current view. Notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap’s $modal service). If the user selects “Ignore Changes” their changes will be discarded and the application will navigate to the route they originally intended to visit. This is done by first detaching from the $locationChangeStart event with a call to onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don’t get stuck in a never-ending cycle where the dialog keeps displaying when they click the “Ignore Changes” button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they’ll stay on the current view.

Conclusion

The key to cancelling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn’t occur. I’m hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on GitHub (specifically the customerEdit view). Learn more about the application here.
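As a footnote: if you don’t want to bring in the custom modalService, the same cancel-or-continue behavior can be sketched with the browser’s built-in confirm() dialog. Because confirm() is synchronous, you only need to prevent the default action when the user chooses to stay; there is no need to deregister and then re-navigate. This is my own simplified variation (the module, controller, and form names are placeholders), not code from the Customer Manager application:

app.controller('EditController', function ($scope, $rootScope) {
    var onRouteChangeOff = $rootScope.$on('$locationChangeStart', function (event) {
        //Nothing to lose, so let the navigation proceed
        if (!$scope.editForm.$dirty) return;

        if (window.confirm('You have unsaved changes. Leave the page?')) {
            onRouteChangeOff(); //Stop listening; navigation continues normally
        } else {
            event.preventDefault(); //Stay on the current view
        }
    });
});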

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I assume that you have never used Windows Azure, and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records. There are five steps that you must complete to host the jQuery application:

1. Sign up for Windows Azure
2. Create a Hosted Service
3. Install the Windows Azure Tools for Visual Studio
4. Create a Windows Azure Cloud Service
5. Deploy the Cloud Service

Sign Up for Windows Azure

Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry. To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign-up process, you will receive an email that explains how to activate your account.

Accessing the Developer Portal

After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (in a fit of creativity, I named my project StephenWalther).

Creating a New Windows Azure Hosted Service

Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Service. Because we have code that we want to run in the cloud – the WCF service – we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services. When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.

Install the Windows Azure Tools for Visual Studio

We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio work with both Visual Studio 2008 and Visual Studio 2010. Installation is painless: you just need to check some agreement checkboxes and click the Next button a few times and installation will begin.

Creating a Windows Azure Application

After you install the Windows Azure Tools for Visual Studio, you can create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service jQueryApp. Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC, or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application, and we will use this project to create our jQuery application.

Creating the jQuery Application in the Cloud

We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Products</title>
    <style type="text/css">
        #productContainer div
        {
            border:solid 1px black;
            padding:5px;
            margin:5px;
        }
    </style>
</head>
<body>
<h1>Product Catalog</h1>
<div id="productContainer"></div>

<script id="productTemplate" type="text/html">
    <div>
        Name: {{= name }}
        <br />
        Price: {{= price }}
    </div>
</script>

<script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
<script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
<script type="text/javascript">
    var products = [
        {name:"Milk", price:4.55},
        {name:"Yogurt", price:2.99},
        {name:"Steak", price:23.44}
    ];
    $("#productTemplate").render(products).appendTo("#productContainer");
</script>
</body>
</html>

The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx

You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage, and a dialog reports the progress of this setup. If the Default.htm page works as expected, you should see the list of three products.

Adding an Ajax-Enabled WCF Service

In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products.
The final code for the ProductService.svc should look like this:

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace WebRole1
{
    public class Product
    {
        public string name { get; set; }
        public decimal price { get; set; }
    }

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class ProductService
    {
        [OperationContract]
        public IList<Product> SelectProducts()
        {
            var products = new List<Product>();
            products.Add(new Product { name = "Milk", price = 4.55m });
            products.Add(new Product { name = "Yogurt", price = 2.99m });
            products.Add(new Product { name = "Steak", price = 23.44m });
            return products;
        }
    }
}

In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next, you need to modify the Default.htm page to use ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from ProductService.svc is displayed in the client template.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Products</title>
    <style type="text/css">
        #productContainer div
        {
            border:solid 1px black;
            padding:5px;
            margin:5px;
        }
    </style>
</head>
<body>
<h1>Product Catalog</h1>
<div id="productContainer"></div>

<script id="productTemplate" type="text/html">
    <div>
        Name: {{= name }}
        <br />
        Price: {{= price }}
    </div>
</script>

<script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
<script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
<script type="text/javascript">
    $.post("ProductService.svc/SelectProducts", function (results) {
        var products = results["d"];
        $("#productTemplate").render(products).appendTo("#productContainer");
    });
</script>
</body>
</html>

Deploying the jQuery Application to the Cloud

Now that we have created our jQuery application, we are ready to deploy it to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select Publish, your application and your application configuration information are packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload them yourself. Return to the Windows Azure Developer Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application. Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button. While your application is in the process of being deployed, you can view a progress bar.
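One detail in the script above that often trips people up is the results["d"] indexing: Ajax-enabled WCF services in ASP.NET 3.5 and later wrap their JSON response in a "d" property as a safeguard against JSON hijacking, so the array must be unwrapped before rendering. If you want the call to be more explicit (and to surface failures while testing in the cloud), the $.post call can be expanded into a full $.ajax call. This is my own variation on the page’s script, not code from the original walkthrough:

$.ajax({
    type: "POST",
    url: "ProductService.svc/SelectProducts",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function (results) {
        //Unwrap the ASP.NET "d" envelope before rendering
        var products = results.d || results;
        $("#productTemplate").render(products).appendTo("#productContainer");
    },
    error: function (xhr, status) {
        $("#productContainer").text("Failed to load products: " + status);
    }
});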
Running the jQuery Application in the Cloud

After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud, and the WCF service executes every time you request the page to retrieve the list of products.

Summary

Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser, and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale: if, all of a sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything the Internet might throw at us.

    Read the article

  • Mutation Problem - Clojure

    - by Silanglaya Valerio
I'm having trouble changing an element of my function, which is represented as a list. Here is the code for the random function:

(defn makerandomtree-10
  [pc maxdepth maxwidth fpx ppx]
  (if-let [output
           (if (and (< (rand) fpx) (> maxdepth 0))
             (let [head (nth operations (rand-int (count operations)))
                   children (doall
                              (loop [function (list)
                                     width maxwidth]
                                (if (pos? width)
                                  (recur (concat function
                                                 (list (makerandomtree-10
                                                         pc
                                                         (dec maxdepth)
                                                         (+ 2 (rand-int (- maxwidth 1)))
                                                         fpx ppx)))
                                         (dec width))
                                  function)))]
               (concat (list head) children))
             (if (and (< (rand) ppx) (>= pc 0))
               (nth parameters (rand-int (count parameters)))
               (rand-int 100)))]
    output))

I'll also provide my mutation function, which is still not good enough. I need to be able to eval my statement, so the following is still insufficient:

(defn mutate-5
  "Chooses a node and changes it."
  [function pc maxwidth pchange]
  (if (< (rand) pchange)
    (let [output (makerandomtree-10 pc 3 maxwidth 0.5 0.6)]
      (if (seq? output) output (list output)))
    ;; mutate the children of root
    ;; declare an empty accumulator list, with root as its head
    (let [head (list (first function))
          children (loop [acc (list)
                          walker (next function)]
                     (println "----------")
                     (println walker)
                     (println "-----ACC-----")
                     (println acc)
                     (if (not walker)
                       acc
                       (if (or (seq? (first function))
                               (contains? (set operations) (first function)))
                         (recur (concat acc (mutate-5 walker pc maxwidth pchange))
                                (next walker))
                         (if (< (rand) pchange)
                           (if (some (set parameters) walker)
                             (recur (concat acc (list (nth parameters (rand-int (count parameters)))))
                                    (if (seq? walker) (next walker) nil))
                             (recur (concat acc (list (rand-int 100)))
                                    (if (seq? walker) (next walker) nil)))
                           (recur acc (if (seq? walker) (next walker) nil))))))]
      (concat head (list children)))))

(Side note: do you have any links/books for learning Clojure?)

    Read the article
