Search Results

Search found 51418 results on 2057 pages for 'team system'.


  • Why does HP Update at remote system trigger RDP printing at local system?

    - by lcbrevard
    This is obscure. When connected with RDP to another system that has HP Update installed on it, either directly running HP Update or having the notification pop up asking if you want to run HP Update causes the local system to try to print something to a peculiarly chosen local printer.

    Case 1: Desktop Win 7 Ult system RDP connected to HP laptop Win 7 Ult system. When HP Update runs on the laptop, a dialog for XPS Writer Save As... appears on the desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If HP Update pops up the request to run the update and you are not at the desk when this happens, there can be dozens of queued requests for this bogus printing. NOTE: the XPS Writer is not selected as a default printer on either system.

    Case 2: (Different) HP laptop Win 7 Ult system RDP connected to an XP Pro "brand X" desktop system that has HP printer drivers installed. If the request-to-run-HP-Update notification pops up on the XP system, dozens of attempts to print, in this case to a Versa Check printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this. NOTE: the Versa Check Writer is not selected as a default printer on either system.

    THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected?

    Read the article

  • Use System Restore to rescue lost user profile in Win XP?

    - by im_chc
    Hi! My Win XP account profile has recently been "reset". Many app settings are lost. For example, the "recent projects" list in VS 2005 is empty. There are probably lots of other things that are painfully lost without me even knowing! What can I do? Can I retrieve the app settings from System Restore? I don't have much confidence in this utility, though I think restoring to a point when the profile still worked, and backing up the C:\Documents and Settings folder (is that where all the app setting files are located?), should work... Is it reliable to restore to a previous restore point and then go back to the latest one? I've googled System Restore, and it looks like what the utility does is just back up some physical files and restore them when you run a restore. That sounds quite safe, but I am still uncomfortable with it. Thanks in advance for your help!

    Read the article

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems and have thought that the idea of an entity just being an ID is quite interesting. However, I don't know how this completely works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine if collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing that I've read often is that the 'type' of an entity is inferred from what components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
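
    For concreteness, here is a minimal sketch of the entity-as-ID idea in C#. It is illustrative only (the names World, BoundsComponent, and MovementSystem are made up for this example): components live in per-type stores owned by the world, not by any one system, so input, physics, and rendering systems can all read and write the same BoundsComponent entry.

        using System.Collections.Generic;

        public struct BoundsComponent { public float X, Y, W, H; }
        public struct VelocityComponent { public float Dx, Dy; }

        public class World
        {
            private int _nextId;
            // One store per component type; an entity "has" a component
            // if its ID is a key in that store.
            public Dictionary<int, BoundsComponent> Bounds = new Dictionary<int, BoundsComponent>();
            public Dictionary<int, VelocityComponent> Velocities = new Dictionary<int, VelocityComponent>();

            public int CreateEntity() { return _nextId++; }
        }

        public class MovementSystem
        {
            // A system is just logic over every entity that has the
            // components it cares about; it is not tied to one entity type.
            public void Update(World world, float dt)
            {
                foreach (var id in world.Velocities.Keys)
                {
                    BoundsComponent b;
                    if (!world.Bounds.TryGetValue(id, out b)) continue;
                    var v = world.Velocities[id];
                    b.X += v.Dx * dt;
                    b.Y += v.Dy * dt;
                    world.Bounds[id] = b; // structs copy by value, so write back
                }
            }
        }

    In this framing, a "robot" is simply an ID that happens to appear in the Bounds and Velocities stores; it gets moved because MovementSystem finds it there, not because anything knows it is a robot. Adding a new component usually means adding a new store and either a new system or an extra lookup in an existing one.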

    Read the article

  • 8 Mac System Features You Can Access in Recovery Mode

    - by Chris Hoffman
    A Mac's Recovery Mode is for more than just reinstalling Mac OS X. You'll find many other useful troubleshooting utilities here, and you can use these even if your Mac can't boot normally. To access Recovery Mode, restart your Mac and press and hold the Command + R keys during the boot-up process. This is one of several hidden startup options on a Mac.

    Reinstall Mac OS X
    Most people know Recovery Mode as the place you go to reinstall OS X on your Mac. Recovery Mode will download the OS X installer files from the Internet if you don't have them locally, so they don't take up space on your disk and you'll never have to hunt for an operating system disc. Better yet, it will download up-to-date installation files so you don't have to spend hours installing operating system updates later. Microsoft could learn a lot from Apple here.

    Restore From a Time Machine Backup
    Instead of reinstalling OS X, you can choose to restore your Mac from a Time Machine backup. This is like restoring a system image on another operating system. You'll need an external disk containing a backup image created on the current computer to do this.

    Browse the Web
    The Get Help Online link opens the Safari web browser to Apple's documentation site. It isn't limited to Apple's website, though; you can navigate to any website you like. This feature allows you to access and use a browser on your Mac even if it isn't booting properly. It's ideal for looking up troubleshooting information.

    Manage Your Disks
    The Disk Utility option opens the same Disk Utility you can access from within Mac OS X. It allows you to partition disks, format them, scan disks for problems, wipe drives, and set up drives in a RAID configuration. If you need to edit partitions from outside your operating system, you can just boot into the recovery environment; you don't have to download a special partitioning tool and boot into it.

    Choose the Default Startup Disk
    Click the Apple menu on the bar at the top of your screen and select Startup Disk to access the Choose Startup Disk tool. Use this tool to choose your computer's default startup disk and reboot into another operating system. For example, it's useful if you have Windows installed alongside Mac OS X with Boot Camp.

    Add or Remove an EFI Firmware Password
    You can also add a firmware password to your Mac. This works like a BIOS password or UEFI password on a Windows or Linux PC. Click the Utilities menu on the bar at the top of your screen and select Firmware Password Utility to open this tool. Use the tool to turn on a firmware password, which will prevent your computer from starting up from a different hard disk, CD, DVD, or USB drive without the password you provide. This prevents people from booting up your Mac with an unauthorized operating system. If you've already enabled a firmware password, you can remove it from here.

    Use Network Tools to Troubleshoot Your Connection
    Select Utilities > Network Utility to open a network diagnostic tool. This utility provides a graphical way to view your network connection information. You can also use the netstat, ping, lookup, traceroute, whois, finger, and port scan utilities from here. These can be helpful for troubleshooting Internet connection problems. For example, the ping command can demonstrate whether you can communicate with a remote host and show you if you're experiencing packet loss, while the traceroute command can show you where a connection is failing if you can't connect to a remote server.

    Open a Terminal
    If you'd like to get your hands dirty, you can select Utilities > Terminal to open a terminal from here. This terminal allows you to do more advanced troubleshooting. Mac OS X uses the bash shell, just as typical Linux distributions do.

    Most people will just need to use the Reinstall Mac OS X option here, but there are many other tools you can benefit from. If the Recovery Mode files on your Mac are damaged or unavailable, your Mac will automatically download them from Apple so you can use the full recovery environment.

    Read the article

  • Three Steps to Becoming an Expert Oracle Linux System Administrator

    - by Antoinette O'Sullivan
    Oracle provides a complete system administration curriculum to take you from your initial experience of Unix to being an expert Oracle Linux system administrator. You can take these live instructor-led courses from your own desk through live-virtual events or by traveling to an education center for in-class events.

    Step 1: Unix and Linux Essentials
    This 3-day course is designed for users and administrators who are new to Oracle Linux. It will help you develop the basic UNIX skills needed to interact comfortably and confidently with the operating system. Below is a sample of the in-class events already on the schedule.

    Location | Date | Delivery Language
    Vilvoorde, Belgium | 28 October 2013 | English
    Berlin, Germany | 15 July 2013 | German
    Utrecht, Netherlands | 19 August 2013 | Dutch
    Bucharest, Romania | 12 August 2013 | Romanian
    Ankara, Turkey | 6 January 2013 | Turkish
    Nairobi, Kenya | 5 August 2013 | English
    Kaduna, Nigeria | 15 July 2013 | English
    Woodmead, South Africa | 15 July 2013 | English
    Jakarta, Indonesia | 23 September 2013 | English
    Petaling Jaya, Malaysia | 22 July 2013 | English
    Makati City, Philippines | 3 July 2013 | English
    Bangkok, Thailand | 20 November 2013 | English
    Auckland, New Zealand | 5 August 2013 | English
    Melbourne, Australia | 12 August 2013 | English
    Ottawa, Montreal, Toronto, Canada | 3 September 2013 | English
    San Francisco and San Jose, CA, United States | 15 July 2013 | English
    Reston, VA, United States | 7 August 2013 | English
    Edison, NJ, and King of Prussia, PA, United States | 3 September 2013 | English
    Denver, CO, United States | 25 September 2013 | English
    Cambridge, MA, and Roseville, MN, United States | 6 November 2013 | English
    Phoenix, AZ, and Sacramento, CA, United States | 25 November 2013 | English

    Step 2: Oracle Linux System Administration
    Through this 5-day course, become a knowledgeable Oracle Linux system administrator, learning how to install Oracle Linux and the benefits of Oracle's Unbreakable Enterprise Kernel and Ksplice. Below is a sample of in-class events already on the schedule.

    Location | Date | Delivery Language
    Vienna, Austria | 1 July 2013 | German
    Vilvoorde, Belgium | 18 November 2013 | English
    Zagreb, Croatia | 16 September 2013 | Croatian
    London, England | 3 September 2013 | English
    Manchester, England | 9 September 2013 | English
    Paris, France | 29 July 2013 | French
    Budapest, Hungary | 8 July 2013 | Hungarian
    Utrecht, Netherlands | 2 September 2013 | Dutch
    Warsaw, Poland | 15 July 2013 | Polish
    Bucharest, Romania | 2 December 2013 | Romanian
    Ankara, Turkey | 7 October 2013 | Turkish
    Istanbul, Turkey | 9 September 2013 | Turkish
    Nairobi, Kenya | 12 August 2013 | English
    Petaling Jaya, Malaysia | 29 July 2013 | English
    Kuala Lumpur, Malaysia | 21 October 2013 | English
    Makati City, Philippines | 8 July 2013 | English
    Singapore | 24 July 2013 | English
    Bangkok, Thailand | 26 July 2013 | English
    Canberra, Australia | 19 August 2013 | English
    Melbourne, Australia | 16 September 2013 | English
    Sydney, Australia | 19 August 2013 | English
    Mississauga, Canada | 26 August 2013 | English
    Ottawa, Canada | 4 November 2013 | English
    Phoenix, AZ, United States | 7 October 2013 | English
    Belmont, CA, United States | 23 September 2013 | English
    Irvine, CA, United States | 18 November 2013 | English
    Sacramento, CA, United States | 19 August 2013 | English
    San Francisco, CA, United States | 15 July 2013 | English
    Denver, CO, United States | 19 August 2013 | English
    Schaumburg, IL, United States | 26 August 2013 | English
    Indianapolis, IN, United States | 14 October 2013 | English
    Columbia, MD, United States | 30 September 2013 | English
    Roseville, MN, United States | 19 August 2013 | English
    St Louis, MO, United States | 7 October 2013 | English
    Edison, NJ, United States | 28 October 2013 | English
    Beaverton, OR, United States | 12 August 2013 | English
    Pittsburgh, PA, United States | 9 December 2013 | English
    Reston, VA, United States | 12 August 2013 | English
    Brookfield, WI, United States | 30 September 2013 | English
    São Paulo, Brazil | 15 July 2013 | Brazilian Portuguese

    Step 3: Oracle Linux Advanced System Administration
    This new 3-day course is ideal for administrators who want to learn about managing resources and file systems while developing troubleshooting and advanced storage administration skills. You will learn about Linux Containers, Cgroups, btrfs, DTrace and more. Below is a sample of in-class events already on the schedule.

    Location | Date | Delivery Language
    Melbourne, Australia | 9 October 2013 | English
    Roseville, MN, United States | 3 September 2013 | English

    To register for or learn more about these courses, go to http://oracle.com/education/linux. Watch this video to learn more about Oracle's operating system training.

    Read the article

  • Setup Guide for updating local system and the repository with the incremental Solaris 11.1 SRU

    - by Gurubalan
    This guide covers the steps to implement the following setup:
    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5
    II. Setting up the local system as an IPS repository server (HTTP interface)
    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5
    We assume that the local system is currently installed with Solaris 11.1 GA and that the system doesn't have internet connectivity.
    What I have:
    1. Two parts of the full repo ISO files downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html. Both files are concatenated into a single file using the following command:
        $ cat sol-11_1-repo-full.iso-a sol-11_1-repo-full.iso-b > sol-11_1-repo-full.iso
    I suggest verifying the downloaded file against its md5 checksum value [http://download.oracle.com/otn/solaris/11_1/md5sum.txt] using the following command (the output should match the original checksum value for that file):
        $ digest -a md5 <file-name>
    2. The incremental repo sol-11_1_16_5_0-incr-repo.iso downloaded from MOS [Patch 18269379: ORACLE SOLARIS 11.1.16.5.0 REPO ISO IMAGE (SPARC/X86 (64-BIT)]. You can get the checksum value of the incremental repo ISO by ticking the "show digest details" check box when you download the file.
    3. The local system IP is 192.168.10.10, and port 81 is reserved for the repo server.
    Please note that this repo file (either full or incremental) is common to both SPARC and X86 (64-bit).

    Steps to update the local system:
    1. Mount the S11.1 full repo ISO at /mnt:
        $ mount -F hsfs /soft/sol-11_1-repo-full.iso /mnt
    2. Set the pkg publisher to the full repo source:
        $ pkg set-publisher -g file:///mnt/repo solaris
    3. Perform the update of the packages:
        $ pkg update

    II. Setting up the local system (Oracle Solaris 11.1) as an IPS repository server (HTTP interface)
    Please note that we have already mounted the full repo ISO at /mnt.
    1. Copy /mnt permanently to the disk location at /s11.1:
        # zfs create -o atime=off -o mountpoint=/s11.1 rpool/s11.1
        # rsync -aP /mnt/* /s11.1
    2. Unmount /mnt:
        # umount /mnt
    3. To allow clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service:
        $ svccfg -s application/pkg/server setprop pkg/inst_root=<data_source>/repo
        e.g.: $ svccfg -s application/pkg/server setprop pkg/inst_root=/s11.1/repo
    4. Set the port number to 81:
        $ svccfg -s application/pkg/server setprop pkg/port=<port_number>
        e.g.: $ svccfg -s application/pkg/server setprop pkg/port="81"
    5a. Enable the pkg/server service (if the service is disabled):
        $ svcs pkg/server
        STATE          STIME    FMRI
        disabled       19:55:03 svc:/application/pkg/server:default
        $ svcadm enable pkg/server
    5b. Refresh/restart the service if it is already online:
        $ svcadm refresh application/pkg/server
        $ svcadm restart application/pkg/server
    6. Set the pkg publisher on the repo server and repo clients:
        $ pkg set-publisher -G '*' -g http://<ip>:<port> solaris
        e.g.: $ pkg set-publisher -G '*' -g 'http://192.168.10.10:81' solaris
    7. Verify the Solaris 11.1 version from the repository:
        $ pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire     0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
    You will have multiple row entries if the repository is set up with incremental SRUs.

    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5
    1. Mount the S11.1 incremental SRU repo ISO at /mnt:
        $ mount -F hsfs <full_path_to>/sol-11_1_sruN_bldnum_respinnum-incr-repo.iso /mnt
        e.g.: $ mount -F hsfs /soft/sol-11_1_16_5_0-incr-repo.iso /mnt
    2. Update the local repository:
        $ pkgrecv -s /mnt/repo -d /s11.1/repo '*'
    3. Build a search index:
        $ pkgrepo -s /s11.1/repo refresh
        Initiating repository refresh.
    4. Refresh/restart the service:
        $ svcadm refresh svc:/application/pkg/server
        $ svcadm restart svc:/application/pkg/server
    5. Verify that the repo has the incremental SRU as well:
        # pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire      0.5.11,5.11-0.175.1.16.0.5.0:20140218T165248Z
        solaris   entire      0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a ghost++ bot that hosts games of Dota (a Warcraft 3 map that is played 5 players versus 5 players) and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games they played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2. My bot collects quite a few variables for each player from each game.

    The important ones: win/lose/game did not finish; # of player kills; # of player deaths; # of kills the player assisted.

    The not so important ones: # of enemy creep kills; # of creep sneak attacks; # of neutral creep kills; # of tower kills; # of rax kills; # of courier kills.

    Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are what the goal of the game is (once a team loses all their towers/rax, their throne can be attacked; if that is destroyed, they lose), but I don't really count these as important because it is pretty random who gets the credit for the tower kill, and chances are if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map.

    I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player that is really good at killing and has 40 kills and only 10 deaths, but in their 5 games they've only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing, it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this because I have so much data... but I don't really know how to look at the data to answer questions like this. Can anyone help me come up with formulas to help team balance and predict the outcome? Thanks, Dan
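
    As a starting point, one widely used approach is to treat each team's strength as the average of its members' Elo ratings and update every member by the same amount after a match. A minimal sketch in C# (illustrative only; the K factor, the 1500 starting rating, and any per-player weighting by kills/deaths are parameters you would fit against your own game history):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Player
        {
            public string Name;
            public double Rating = 1500; // conventional Elo starting value
        }

        public static class TeamElo
        {
            const double K = 32; // update step size; tune on historical games

            // Expected score of team A against team B under the Elo model.
            public static double ExpectedScore(double ratingA, double ratingB)
            {
                return 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
            }

            // Rate a finished 5v5 match from its observed outcome.
            public static void Update(IList<Player> teamA, IList<Player> teamB, bool teamAWon)
            {
                double ratingA = teamA.Average(p => p.Rating);
                double ratingB = teamB.Average(p => p.Rating);
                double expectedA = ExpectedScore(ratingA, ratingB);
                double delta = K * ((teamAWon ? 1.0 : 0.0) - expectedA);

                // Everyone on a team moves together; kill/death stats could
                // scale each player's share of delta, but that is a separate
                // modeling decision (and where the per-game variables come in).
                foreach (var p in teamA) p.Rating += delta;
                foreach (var p in teamB) p.Rating -= delta;
            }
        }

    For balancing, you would then search over the ways to split ten players into two teams and pick the split whose ExpectedScore is closest to 0.5.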

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussion on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should give it a try (finally) and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration:

    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    Let's embrace IPv6

    The basic configuration on Linux is actually very simple, as the kernel, operating system, and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this):

        $ sudo nano /etc/network/interfaces

        auto eth0
        # IPv4 configuration
        iface eth0 inet static
          address 192.168.1.2
          network 192.168.1.0
          netmask 255.255.255.0
          broadcast 192.168.1.255
        # IPv6 configuration
        iface eth0 inet6 static
          pre-up modprobe ipv6
          address 2001:db8:bad:a55::2
          netmask 64

    Of course, you might have to adjust your interface device (eth0) or you might be interested in having multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the booting phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios this might be an optional line. Anyway, it doesn't hurt to have it enabled after all, just to be on the safe side. Next, either restart your network subsystem like so:

        $ sudo service networking restart

    Or you might prefer to do it manually with identical parameters, like so:

        $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64

    In case you're logged in remotely to your PC (i.e. via ssh), it is highly advisable to opt for the second choice and add the device manually. You can check your configuration afterwards with one of the following commands (depending on which is installed):

        $ sudo ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
                  inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
                  inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
                  inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        $ sudo ip -6 address show eth0
        3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
            inet6 2001:db8:bad:a55::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::221:5aff:fe50:d794/64 scope link
               valid_lft forever preferred_lft forever

    In both cases, it confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system. But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu Wiki. Continue to configure your network to provide IPv6 address information automatically in your local infrastructure.

    Read the article

  • Web part error on Team Foundation Server Project Portal Sharepoint site...

    - by user304671
    Hi all, I recently installed TFS 2010 with the included SharePoint services on a single server. I am getting the following error multiple times on the project dashboard web page of the TFS 2010 project portal after creating a brand new project in the default project collection. I am not an expert with WSS, so any guidance will be greatly appreciated. After reading a few articles I understand there are some DLLs that are probably not declared as safe in a web.config file. But I am not sure which DLLs they are or where the web.config file is. I looked at IIS to find out, but the directory structure in IIS is quite different from the URL path... Thanks very much. Web Part Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. The type is not registered as safe. Thanks in advance.

    Read the article

  • Speed up executable program Linux. Bit Toggling

    - by AK_47
    I have a ZyBo circuit board which has a ArmV7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50MHz. However, my program's created clock only has a max frequency of 115 Hz. I need this program to output as fast as possible because the PMOD I'm using is capable of 50MHz. I compiled my program with the following code line: gcc -ofast (c_program) Here is some sample code: #include <stdio.h> #include <stdlib.h> #define ARRAYSIZE 511 //________________________________________ //macro for the SIGNAL PMOD //________________________________________ //DATA //ZYBO Use Pin JE1 #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction"); #define SIGNAL_ON system("echo 1 > /sys/class/gpio/gpio54/value"); #define SIGNAL_OFF system("echo 0 > /sys/class/gpio/gpio54/value"); //________________________________________ //macro for the "CLOCK" PMOD //________________________________________ //CLOCK //ZYBO Use Pin JE4 #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction"); #define MYCLOCK_ON system("echo 1 > /sys/class/gpio/gpio57/value"); #define MYCLOCK_OFF system("echo 0 > /sys/class/gpio/gpio57/value"); int main(void){ int myarray[ARRAYSIZE] = {//hard coded array for signal data 1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0 }; INIT_SIGNAL INIT_MYCLOCK; //infinite loop int i; do{ i = 0; do{ /* 1020 is chosen because it is twice the size needed allowing for the changes in the clock. (511= 0-510, 510*2= 1020 ==> 0-1020 needed, so 1021 it is) */ if((i%2)==0) { MYCLOCK_ON; if(myarray[i/2] == 1){ SIGNAL_ON; }else{ SIGNAL_OFF; } } else if((i%2)==1) { MYCLOCK_OFF; //dont need to change the signal since it will just stay at whatever it was. } ++i; } while(i < 1021); } while(1); return 0; } I'm using the 'system' call to tell the system to output 1 volt or 0 volts onto a pin on the board (to represent the data signal and clock signal. One pin for the data and another for the clock). That was the only way I knew to tell the system to output a voltage. What can I do to make my executable program output to be at least in the magnitude of MegaHertz?

    Read the article

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the dll's from solution one to successfully build. Solution two has a Binaries folder where the dll's from solution one need to be copied before building Solution two. I've been trying an AfterBuild Target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it would probably fire after both solutions have compiled, but that's not what I want. <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln"> <Targets>AfterCompileFramework</Targets> <Properties></Properties> </SolutionToBuild> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> <ItemGroup> <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/> </ItemGroup> <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/> <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>

    Read the article

  • How to convert this procedural programming to object-oriented programming?

    - by manus91
    I have a source code that is needed to be converted by creating classes, objects and methods. So far, I've just done by converting the initial main into a separate class. But I don't know what to do with constructor and which variables are supposed to be private. This is the code : import java.util.*; public class Card{ private static void shuffle(int[][] cards){ List<Integer> randoms = new ArrayList<Integer>(); Random randomizer = new Random(); for(int i = 0; i < 8;) { int r = randomizer.nextInt(8)+1; if(!randoms.contains(r)) { randoms.add(r); i++; } } List<Integer> clonedList = new ArrayList<Integer>(); clonedList.addAll(randoms); Collections.shuffle(clonedList); randoms.addAll(clonedList); Collections.shuffle(randoms); int i=0; for(int r=0; r < 4; r++){ for(int c=0; c < 4; c++){ cards[r][c] = randoms.get(i); i++; } } } public static void play() throws InterruptedException { int ans = 1; int preview; int r1,c1,r2,c2; int[][] cards = new int[4][4]; boolean[][] cardstatus = new boolean[4][4]; boolean gameover = false; int moves; Scanner input = new Scanner(System.in); do{ moves = 0; shuffle(cards); System.out.print("Enter the time(0 to 5) in seconds for the preview of the answer : "); preview = input.nextInt(); while((preview<0) || (preview>5)){ System.out.print("Invalid time!! Re-enter time(0 - 5) : "); preview = input.nextInt(); } preview = 1000*preview; System.out.println(" "); for (int i =0; i<4;i++){ for (int j=0;j<4;j++){ System.out.print(cards[i][j]); System.out.print(" "); } System.out.println(""); System.out.println(""); } Thread.sleep(preview); for(int b=0;b<25;b++){ System.out.println(" "); } for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ System.out.print("*"); System.out.print(" "); cardstatus[r][c] = false; } System.out.println(""); System.out.println(" "); } System.out.println(""); do{ do{ System.out.print("Please insert the first card row : "); r1 = input.nextInt(); while((r1<1) || (r1>4)){ System.out.print("Invalid coordinate!! Re-enter first card row : "); r1 = input.nextInt(); } System.out.print("Please insert the first card column : "); c1 = input.nextInt(); while((c1<1) || (c1>4)){ System.out.print("Invalid coordinate!! Re-enter first card column : "); c1 = input.nextInt(); } if(cardstatus[r1-1][c1-1] == true){ System.out.println("The card is already flipped!! Select another card."); System.out.println(""); } }while(cardstatus[r1-1][c1-1] != false); do{ System.out.print("Please insert the second card row : "); r2 = input.nextInt(); while((r2<1) || (r2>4)){ System.out.print("Invalid coordinate!! Re-enter second card row : "); r2 = input.nextInt(); } System.out.print("Please insert the second card column : "); c2 = input.nextInt(); while((c2<1) || (c2>4)){ System.out.print("Invalid coordinate!! Re-enter second card column : "); c2 = input.nextInt(); } if(cardstatus[r2-1][c2-1] == true){ System.out.println("The card is already flipped!! 
Select another card."); } if((r1==r2)&&(c1==c2)){ System.out.println("You can't select the same card twice!!"); continue; } }while(cardstatus[r2-1][c2-1] != false); r1--; c1--; r2--; c2--; System.out.println(""); System.out.println(""); System.out.println(""); for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ if((r==r1)&&(c==c1)){ System.out.print(cards[r][c]); System.out.print(" "); } else if((r==r2)&&(c==c2)){ System.out.print(cards[r][c]); System.out.print(" "); } else if(cardstatus[r][c] == true){ System.out.print(cards[r][c]); System.out.print(" "); } else{ System.out.print("*"); System.out.print(" "); } } System.out.println(" "); System.out.println(" "); } System.out.println(""); if(cards[r1][c1] == cards[r2][c2]){ System.out.println("Cards Matched!!"); cardstatus[r1][c1] = true; cardstatus[r2][c2] = true; } else{ System.out.println("No cards match!!"); } Thread.sleep(2000); for(int b=0;b<25;b++){ System.out.println(""); } for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ if(cardstatus[r][c] == true){ System.out.print(cards[r][c]); System.out.print(" "); } else{ System.out.print("*"); System.out.print(" "); } } System.out.println(""); System.out.println(" "); } System.out.println(""); System.out.println(""); System.out.println(""); gameover = true; for(int r=0;r<4;r++){ for( int c=0;c<4;c++){ if(cardstatus[r][c]==false){ gameover = false; break; } } if(gameover==false){ break; } } moves++; }while(gameover != true); System.out.println("Congratulations, you won!!"); System.out.println("It required " + moves + " moves to finish it."); System.out.println(""); System.out.print("Would you like to play again? (1=Yes / 0=No) : "); ans = input.nextInt(); }while(ans == 1); } } The main class is: import java.util.*; public class PlayCard{ public static void main(String[] args) throws InterruptedException{ Card game = new Card(); game.play(); } } Should I simplify the Card class by creating other classes? Through this code, my javadoc has no constructtor. So i need help on this!

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment" - GovDev

    - by Hosam Kamel
    Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft's application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMI via the use of customizable XML process templates. Since 2005, TFS has been a welcome addition to the Microsoft developer tool line-up for government agencies of all sizes and missions. However, many government development teams consistently struggle to leverage an iterative development process while also providing the structure, visibility and status reporting required by many government, waterfall-centric project methodologies. GovDev is an open-source TFS process template that combines the formality of CMMI/waterfall with the flexibility of Agile/iterative development. The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide real-time visibility across the lifecycle, with full traceability and drill-down to tasks, tests and code.

    The TFS Accelerator contains:
    - A custom TFS process template that implements a requirements-centric yet iterative process with extreme traceability throughout the lifecycle.
    - A custom "Requirements Traceability Report" that provides a single view of traceability for the project. Within the Traceability Report, you can also view live status indicators and "click through" to the individual assets (even changesets).
    - A custom report that focuses on "Contributions by Team Member", tracking things like "number of check-ins" and "net lines added".
    - Fully integrated documentation on the entire process and features.

    For a 45-minute demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us

    Download it from CodePlex here.

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space, unless compressed somehow), preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running Ubuntu as my main development machine, and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to back up your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, IDEs, compilers, graphics drivers, etc.). So at the very least, if I do not back up the operating system itself, I need to grab all config files, all program binaries, all created but required files, etc. I'd rather back up too much than too little; an Ubuntu install is tiny anyway. Also, this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to back up the root partition, excluding some things like /mnt, /media, /dev, /proc, etc. Although many websites assured me you can back up a running Linux system this way, that seems to be false, as it complained that it could not back up the following files:

        /boot/System.map-3.0.0-17-generic
        /boot/System.map-3.2.0-22-generic
        /boot/vmcoreinfo-3.0.0-17-generic
        /boot/vmlinuz-3.0.0-17-generic
        /boot/vmlinuz-3.2.0-22-generic
        /etc/.pwd.lock
        /etc/NetworkManager/system-connections/LAN Connection
        /etc/apparmor.d/cache/lightdm-guest-session
        /etc/apparmor.d/cache/sbin.dhclient
        /etc/apparmor.d/cache/usr.bin.evince
        /etc/apparmor.d/cache/usr.lib.telepathy
        /etc/apparmor.d/cache/usr.sbin.cupsd
        /etc/apparmor.d/cache/usr.sbin.tcpdump
        /etc/apt/trustdb.gpg
        /etc/at.deny
        /etc/ati/inst_path_default
        /etc/ati/inst_path_override
        /etc/chatscripts
        /etc/cups/ssl
        /etc/cups/subscriptions.conf
        /etc/cups/subscriptions.conf.O
        /etc/default/cacerts
        /etc/fuse.conf
        /etc/group-
        /etc/gshadow
        /etc/gshadow-
        /etc/mtab.fuselock
        /etc/passwd-
        /etc/ppp/chap-secrets
        /etc/ppp/pap-secrets
        /etc/ppp/peers
        /etc/security/opasswd
        /etc/shadow
        /etc/shadow-
        /etc/ssl/private
        /etc/sudoers
        /etc/sudoers.d/README
        /etc/ufw/after.rules
        /etc/ufw/after6.rules
        /etc/ufw/before.rules
        /etc/ufw/before6.rules
        /lib/ufw/user.rules
        /lib/ufw/user6.rules
        /lost+found
        /root
        /run/crond.reboot
        /run/cups/certs
        /run/lightdm
        /run/lock/whoopsie/lock
        /run/udisks
        /var/backups/group.bak
        /var/backups/gshadow.bak
        /var/backups/passwd.bak
        /var/backups/shadow.bak
        /var/cache/apt/archives/lock
        /var/cache/cups/job.cache
        /var/cache/cups/job.cache.O
        /var/cache/cups/ppds.dat
        /var/cache/debconf/passwords.dat
        /var/cache/ldconfig
        /var/cache/lightdm/dmrc
        /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash
        /var/lib/apt/lists/lock
        /var/lib/dpkg/lock
        /var/lib/dpkg/triggers/Lock
        /var/lib/lightdm
        /var/lib/mlocate/mlocate.db
        /var/lib/polkit-1
        /var/lib/sudo
        /var/lib/urandom/random-seed
        /var/lib/ureadahead/pack
        /var/lib/ureadahead/run.pack
        /var/log/btmp
        /var/log/installer/casper.log
        /var/log/installer/debug
        /var/log/installer/partman
        /var/log/installer/syslog
        /var/log/installer/version
        /var/log/lightdm/lightdm.log
        /var/log/lightdm/x-0-greeter.log
        /var/log/lightdm/x-0.log
        /var/log/speech-dispatcher
        /var/log/upstart/alsa-restore.log
        /var/log/upstart/alsa-restore.log.1.gz
        /var/log/upstart/console-setup.log
        /var/log/upstart/console-setup.log.1.gz
        /var/log/upstart/container-detect.log
        /var/log/upstart/container-detect.log.1.gz
        /var/log/upstart/hybrid-gfx.log
        /var/log/upstart/hybrid-gfx.log.1.gz
        /var/log/upstart/modemmanager.log
        /var/log/upstart/modemmanager.log.1.gz
        /var/log/upstart/module-init-tools.log
        /var/log/upstart/module-init-tools.log.1.gz
        /var/log/upstart/procps-static-network-up.log
        /var/log/upstart/procps-static-network-up.log.1.gz
        /var/log/upstart/procps-virtual-filesystems.log
        /var/log/upstart/procps-virtual-filesystems.log.1.gz
        /var/log/upstart/rsyslog.log
        /var/log/upstart/rsyslog.log.1.gz
        /var/log/upstart/ureadahead.log
        /var/log/upstart/ureadahead.log.1.gz
        /var/spool/anacron/cron.daily
        /var/spool/anacron/cron.monthly
        /var/spool/anacron/cron.weekly
        /var/spool/cron/atjobs
        /var/spool/cron/atspool
        /var/spool/cron/crontabs
        /var/spool/cups

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by ken.pulverman
    ERP has been the backbone of enterprise software. The data held in your ERP system is the core of most companies. Efficiencies gained through the accounting and resource allocation capabilities of ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline), there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps, and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider, your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™: SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes, ERP as we've known it is dead, but long live ERP as a connected application member of the modern enterprise. I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com, or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills; he may be purely a "lone coder", with little or no ability to help or work with others, and may lack the mentoring ability to transfer his technical skills to others. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and of the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points: a single up-vote (perhaps given just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which a particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server-side developers with minimal expertise in JavaScript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago two high-level client-side developers joined our team. They can do in HTML/CSS/JavaScript pretty much anything that we could previously do with server-side code and server-side web controls:

    - Show/hide controls
    - Do validation
    - Control AJAX refreshing

    So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kinda like the Amazon Fulfillment API: http://docs.amazonwebservices.com/fws/latest/APIReference/, so that client-side developers would fully take over the UI, while server-side developers would only concentrate on business logic. So for an ordering system you would have a high-level API like:

    OrderService.asmx
    CreateOrderResponse CreateOrder(CreateOrderRequest)
    AddOrderItem
    AddPayment
    SubmitPayment
    GetOrderByID
    FindOrdersByCriteria
    ...

    There would be JSON/REST access to the API, so it would be easy to consume from a client-side UI. We could use this API for both internal UI development and also for third parties to create their own applications. With advances in JavaScript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (a la Amazon) that client-side developers can consume?
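
    As a sketch of what one coarse-grained operation on such a facade could look like (C#; the DTO and interface names mirror the hypothetical OrderService above and are not from any real API):

        using System.Collections.Generic;

        // Plain request/response DTOs keep the contract independent of any
        // UI technology, so the same operation can back an internal page,
        // a JavaScript client, or a third-party integration.
        public class OrderItem
        {
            public string ProductId { get; set; }
            public int Quantity { get; set; }
        }

        public class CreateOrderRequest
        {
            public string CustomerId { get; set; }
            public List<OrderItem> Items { get; set; }
        }

        public class CreateOrderResponse
        {
            public string OrderId { get; set; }
            public bool Success { get; set; }
            public string ErrorMessage { get; set; }
        }

        public interface IOrderService
        {
            // One call per business operation; how the data was collected
            // and how the result is rendered is entirely the client's concern.
            CreateOrderResponse CreateOrder(CreateOrderRequest request);
        }

    The server-side team owns validation and business rules behind this boundary; the client-side team owns everything about presentation, which is exactly the division of labor described above.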

    Read the article

  • How to correct a junior, but encourage him to think for himself? [closed]

    - by Phil
    I am the lead of a small team where everyone has less than a year of software development experience. I wouldn't by any means call myself a software guru, but I have learned a few things in the few years that I've been writing software. When we do code reviews I do a fair bit of teaching and correcting mistakes. I will say things like "This is overly complex and convoluted, and here's why," or "What do you think about moving this method into a separate class?" I am extra careful to communicate that if they have questions or dissenting opinions, that's ok and we need to discuss. Every time I correct someone, I ask "What do you think?" or something similar. However they rarely if ever disagree or ask why. And lately I've been noticing more blatant signs that they are blindly agreeing with my statements and not forming opinions of their own. I need a team who can learn to do things right autonomously, not just follow instructions. How does one correct a junior developer, but still encourage him to think for himself? Edit: Here's an example of one of these obvious signs that they're not forming their own opinions: Me: I like your idea of creating an extension method, but I don't like how you passed a large complex lambda as a parameter. The lambda forces others to know too much about the method's implementation. Junior (after misunderstanding me): Yes, I totally agree. We should not use extension methods here because they force other developers to know too much about the implementation. There was a misunderstanding, and that has been dealt with. But there was not even an OUNCE of logic in his statement! He thought he was regurgitating my logic back to me, thinking it would make sense when really he had no clue why he was saying it.

    Read the article

  • wcf callback exception after updating to .net 4.0

    - by James
    I have a WCF service that uses callbacks with DualHttpBindings. The service pushes back a DataTable of search results to the client (for a long-running search) as it finds them. This worked fine in .NET 3.5. Since I updated to .NET 4.0, it bombs out with a System.Runtime.FatalException that actually kills the IIS worker process. I have no idea how to even go about starting to fix this. Any recommendations appreciated. The info from the resulting event log is pasted below.

    An unhandled exception occurred and the process was terminated.
    Application ID: /LM/W3SVC/2/ROOT/CP
    Process ID: 5284
    Exception: System.Runtime.FatalException
    Message: Object reference not set to an instance of an object.
    StackTrace:
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
        at System.ServiceModel.Dispatcher.ChannelHandler.DispatchAndReleasePump(RequestContext request, Boolean cleanThread, OperationContext currentOperationContext)
        at System.ServiceModel.Dispatcher.ChannelHandler.HandleRequest(RequestContext request, OperationContext currentOperationContext)
        at System.ServiceModel.Dispatcher.ChannelHandler.AsyncMessagePump(IAsyncResult result)
        at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
        at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
        at System.Runtime.InputQueue`1.AsyncQueueReader.Set(Item item)
        at System.Runtime.InputQueue`1.Dispatch()
        at System.ServiceModel.Channels.ReliableDuplexSessionChannel.ProcessDuplexMessage(WsrmMessageInfo info)
        at System.ServiceModel.Channels.ReliableDuplexSessionChannel.HandleReceiveComplete(IAsyncResult result)
        at System.ServiceModel.Channels.ReliableDuplexSessionChannel.OnReceiveCompletedStatic(IAsyncResult result)
        at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
        at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
        at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputComplete(IAsyncResult result)
        at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
        at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
        at System.Runtime.InputQueue`1.AsyncQueueReader.Set(Item item)
        at System.Runtime.InputQueue`1.Dispatch()
        at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
        at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
        at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
    InnerException: System.NullReferenceException
    Message: Object reference not set to an instance of an object.
    StackTrace:
        at System.Web.HttpApplication.ThreadContext.Enter(Boolean setImpersonationContext)
        at System.Web.HttpApplication.OnThreadEnterPrivate(Boolean setImpersonationContext)
        at System.Web.AspNetSynchronizationContext.CallCallbackPossiblyUnderLock(SendOrPostCallback callback, Object state)
        at System.Web.AspNetSynchronizationContext.CallCallback(SendOrPostCallback callback, Object state)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)

    Read the article

  • What are the common maintenance tasks on ubuntu?

    - by DaNieL
    When I was using Windows, I used to run defrags, CCleaner and Revo Uninstaller once a month to keep the system and the registry clean. I know Ubuntu (and all Linux distros) has a different system structure and doesn't need defrags, but I've heard there are some maintenance tasks that help to keep the system clean (for example, sudo apt-get clean or sudo apt-get autoremove). How many of those commands/software (and please explain what they do and if they can compromise system stability) do you know and use regularly?

    Read the article

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interact with the internet. Nearly impossible to remember numeric addresses became easy to remember names. Without DNS you'd be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That's assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia:

    The practice of using a name as a simpler, more memorable abstraction of a host's numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to "localhost". Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods. The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Request for Comments have proposed various extensions to the core DNS protocols.

    Over the years it has been refined, but the core of the system is essentially the same. When you type "google.com" into your web browser, a DNS server is used to resolve that host name to the IP address of 209.85.148.105, making the web human-friendly in the process. Domain Name System History [Wikipedia via Wired]
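
    To see that mapping in action programmatically, here is a tiny C# example using the standard System.Net resolver (the name and the addresses it returns are illustrative; Google's actual IPs change over time):

        using System;
        using System.Net;

        class Resolve
        {
            static void Main()
            {
                // Ask the system resolver (ultimately DNS) which addresses
                // stand behind a human-friendly name.
                foreach (IPAddress address in Dns.GetHostAddresses("google.com"))
                    Console.WriteLine(address); // e.g. 209.85.148.105
            }
        }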

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:

    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged, and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like the following:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:

    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit
    More context: The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business.
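
    For what it's worth, the established framing for this is spreadsheet-style recalculation over a dependency graph: record which variables each formula reads, and when an input changes, recompute everything downstream. A minimal sketch in C# (illustrative names; no change notification, and it assumes the graph is acyclic):

        using System;
        using System.Collections.Generic;

        public class CalcGraph
        {
            private readonly Dictionary<string, double> _values = new Dictionary<string, double>();
            private readonly Dictionary<string, Func<CalcGraph, double>> _formulas =
                new Dictionary<string, Func<CalcGraph, double>>();
            private readonly Dictionary<string, List<string>> _dependents =
                new Dictionary<string, List<string>>();

            public double Get(string name) { return _values[name]; }

            // Independent variables are set directly; dependents recompute.
            public void SetInput(string name, double value)
            {
                _values[name] = value;
                Propagate(name);
            }

            // Register a dependent variable: its formula plus the names it reads.
            public void Define(string name, Func<CalcGraph, double> formula, params string[] inputs)
            {
                _formulas[name] = formula;
                foreach (var input in inputs)
                {
                    if (!_dependents.ContainsKey(input)) _dependents[input] = new List<string>();
                    _dependents[input].Add(name);
                }
                _values[name] = formula(this); // initial evaluation; inputs must already be set
            }

            // Walk everything downstream of the changed variable. A node on a
            // "diamond" may recompute more than once, but in an acyclic graph
            // its last recomputation sees all updated inputs.
            private void Propagate(string changed)
            {
                if (!_dependents.ContainsKey(changed)) return;
                var queue = new Queue<string>(_dependents[changed]);
                while (queue.Count > 0)
                {
                    var name = queue.Dequeue();
                    _values[name] = _formulas[name](this);
                    if (_dependents.ContainsKey(name))
                        foreach (var next in _dependents[name]) queue.Enqueue(next);
                }
            }
        }

    Usage, matching the BMI example above:

        var g = new CalcGraph();
        g.SetInput("Height", 1.8);
        g.SetInput("Weight", 75);
        g.Define("BMI", x => x.Get("Weight") / Math.Pow(x.Get("Height"), 2), "Height", "Weight");
        g.SetInput("Weight", 80); // BMI recomputes automatically

    Declaring the input names alongside the formula keeps the dependency map in one place, which addresses the redundancy complaint, though it still isn't compile-time checked; reactive/dataflow libraries and, as you suggest, functional languages make this style more natural.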

    Read the article
