Search Results

Search found 22641 results on 906 pages for 'use case'.

  • The Munin plugin always times out

    - by haoX
    I want to use Munin to graph the temperatures reported on ttyACM0 under Linux, but Munin cannot create the graph. I found some information in "munin-node.log": it shows "Service 'temperature' timed out". So I changed the timeout to 60 and then 120 in /munin/plugin-conf.d/munin-node, but it still times out. Here is part of my plugin:

        if [ "$1" = "config" ]; then
            echo 'graph_title Temperature of board'
            echo 'graph_args --base 1000 -l 0'
            echo 'graph_vlabel temperature(°C)'
            echo 'graph_category temperature'
            echo 'graph_scale no'
            echo 'graph_info This graph shows the temperature of board'
            for i in 1 2 3 4 5; do
                case $i in
                    1) TYPE="Under PCB" ;;
                    2) TYPE="HDD" ;;
                    3) TYPE="PHY" ;;
                    4) TYPE="CPU" ;;
                    5) TYPE="Ambience" ;;
                esac
                name=$(clean_name $TYPE)
                if [ "$TYPE" != "NA" ]; then echo "temp_$name.label $TYPE"; fi
            done
            exit 0
        fi
        for i in 1 2 3 4 5; do
            case $i in
                1) TYPE="Under PCB"  VALUE=$(head -1 /dev/ttyACM0 | awk '{print $1}') ;;
                2) TYPE="HDD"        VALUE=$(head -1 /dev/ttyACM0 | awk '{print $2}') ;;
                3) TYPE="PHY"        VALUE=$(head -1 /dev/ttyACM0 | awk '{print $3}') ;;
                4) TYPE="CPU"        VALUE=$(head -1 /dev/ttyACM0 | awk '{print $4}') ;;
                5) TYPE="Ambience"   VALUE=$(head -1 /dev/ttyACM0 | awk '{print $5}') ;;
            esac
            name=$(clean_name $TYPE)
            if [ "$TYPE" != "NA" ]; then
                echo "temp_$name.value $VALUE"
            fi
        done
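    Note that each iteration of the value loop re-opens /dev/ttyACM0, and a serial read can block until the device emits a line; five blocking reads is a plausible reason the plugin exceeds munin-node's timeout. A minimal sketch of reading the line once and splitting it (the field order is an assumption carried over from the script above):

        LINE=$(head -1 /dev/ttyACM0)   # single blocking read
        set -- $LINE                   # split into $1..$5
        UNDER_PCB=$1; HDD=$2; PHY=$3; CPU=$4; AMBIENCE=$5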

  • How to access original files from before a symlink gets updated, which have since been moved to another dir

    - by Luke Cousins
    We have a website and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use it temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we would like help with is this: the first thing any page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and gets the new version of that file. We need it to get the old one from the old dir, which has now moved. As in:

    1. The system starts a page load and determines the SYSTEM_ROOT_PATH constant to be /home/[XXX]/var_www/live, or, by using PHP's realpath(), /home/[XXX]/php_code/current.
    2. The symlink /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current, where it pointed originally.
    3. The system tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php.

    I'd really appreciate some ideas on how to get around this problem. Thank you.
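    One pattern that sidesteps the moving-directory problem entirely (a sketch, not the layout above - the release path below is hypothetical) is to deploy every release into its own immutable directory and only ever repoint the symlink. The tree an in-flight request resolved via realpath() is then never modified or removed while it may still be in use:

        # Hypothetical release-per-directory deploy
        REL=/home/user/php_code/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$REL"
        rsync -a /path/to/build/ "$REL/"
        ln -s "$REL" /home/user/var_www/live_tmp \
            && mv -Tf /home/user/var_www/live_tmp /home/user/var_www/live
        # prune old release dirs only after their in-flight requests have drained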

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where one of the two replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not happen: the clients should fail over to the brick that is still online. That hasn't been the case here, so I suspect a configuration issue. Here is a description of the system:

    - 2 gluster servers on dedicated hardware (gfs0, gfs1)
    - 8 client servers on VMs (client1, client2, client3, ..., client8)

    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs0
        end-volume

        volume datavol-client-1
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs1
        end-volume

        volume datavol-replicate-0
            type cluster/replicate
            subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
            type cluster/distribute
            subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
            type performance/write-behind
            subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
            type performance/read-ahead
            subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
            type performance/io-cache
            subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
            type performance/quick-read
            subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
            type performance/md-cache
            subvolumes datavol-quick-read
        end-volume

        volume datavol
            type debug/io-stats
            option count-fop-hits on
            option latency-measurement on
            subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as with the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.
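    One knob worth checking (a hedged suggestion - option availability depends on the GlusterFS version) is the client-side ping timeout: when a brick dies, clients block for the full network.ping-timeout (42 seconds by default) before failing over, which to applications can look like the whole mount going down:

        gluster volume info datavol | grep ping              # show current setting, if any
        gluster volume set datavol network.ping-timeout 10   # fail over faster

    The client logs under /var/log/glusterfs/ on one of the client boxes should also state why the mount dropped rather than failed over.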

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago - I ssh to the server, authenticate using a password, and get the welcome message, but it hangs on the "Last login:..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working fine (apache, tomcat, database, ...). The box has out-of-band management, using which I was able to restart it. After the restart the ssh worked fine again and I didn't find anything suspicious in the logs.

    Three days later the same problem occurred on this box again, and newly on yet another server in the cluster - 100% the same symptoms. Both servers have an about two month old installation of Debian Squeeze (6.0.2), and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't been installing anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. The most similar issues I have found:

    - ssh freezes at the "Last Login" line - in our case everything worked fine until recently, so nothing related to settings should be our problem. Disk space checked; I couldn't check the memory, but I would expect something in the logs if the system had been running out of it.
    - Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - a problem with high load on the server; unlike in that case, here nothing changes even if I wait for 10+ minutes.
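    A hang after the banner but before the prompt usually means authentication succeeded and the session stalled in the post-login path (PAM session modules, shell startup files, quota or mount checks) rather than in sshd itself. A hedged diagnostic sketch for the next occurrence, using only standard ssh options:

        ssh -vvv user@server            # watch exactly which step stalls
        ssh -t user@server /bin/sh      # bypass the login shell's startup files
        ssh user@server 'echo ok'       # plain non-interactive exec channel

    If the forced commands work while the interactive login hangs, the suspect is the shell/PAM session setup rather than the network or sshd.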

  • Starting programs from terminal then exiting terminal exits started programs?

    - by sherrellbc
    I was unsure how to phrase the question title. What I mean is that when I use the terminal to start a program, closing the terminal usually also exits the programs started from it. This makes sense from a hierarchical standpoint: the terminal is the parent process which spawns child processes, and any halt of the parent causes subsequent halting of the children as well. However, I've noticed this is not always the case. For example, I downloaded the Sublime Text editor and created a symlink to it in PATH. I can start this program by issuing a sublime command from the terminal, but subsequent closure of the terminal does nothing to Sublime. Other times, though, the child process that was started is also closed, or it hangs and causes problems.

    tl;dr: Is it always the case that programs started from a parent process will be closed when the parent exits? And if so, is there a way to start a program from the terminal and then close the terminal without exiting the started process? The whole point here is to start programs from the terminal so I do not overly populate my desktop with symlinks.
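    What usually kills the children is the SIGHUP the shell sends to its job process groups when the terminal goes away; programs that daemonize or detach from the tty on their own (as GUI editors often do) survive, which would explain the inconsistency. A short sketch of the standard ways to detach a child deliberately (myprog stands in for any command, e.g. the sublime link mentioned above):

        nohup myprog >/dev/null 2>&1 &   # ignore SIGHUP and drop tty output
        myprog & disown                  # remove the job from the shell's job table (bash/zsh)
        setsid myprog                    # start in a new session with no controlling terminal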

  • How do you view / access the contents of a mounted .dmg drive through Terminal (hdiutil, diskutil)?

    - by A. O.
    My external USB drive failed, so I made a .dmg image file of the drive using Disk Utility. Later I was not able to mount the .dmg image, so I used Terminal:

        hdiutil attach -noverify -nomount name.dmg
        diskutil list
        diskutil mountDisk /dev/disk4

    and then received the following message:

        Volume(s) mounted successfully

    However, I can't see the drive or access its contents through Finder. Disk Utility shows the drive as a ghost (greyed out), but I still can't mount it there. Terminal tells me that the drive is mounted and consistently shows it in the diskutil list, but the pwd is not the mounted .dmg image, and I don't know how to change into the mounted image drive to see its contents. So, to be clear: I do not see the files of the mounted image, and I do not know how to access it, or even change the pwd, within Terminal. I was hoping to see the mounted drive through Finder, but I do not. I need help finding a way to access the mounted image drive, if it was really mounted - Terminal says it was, and it shows up under diskutil list as /dev/disk4. Can someone please help me access the files on this drive?
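    On OS X, a successfully mounted volume gets a mount point under /Volumes, so a few standard read-only commands can confirm whether disk4's volumes actually attached anywhere:

        hdiutil info                 # lists attached images and their /dev entries
        mount | grep disk4           # shows the mount point, if any, for disk4's slices
        ls /Volumes                  # mounted volumes normally appear here
        cd "/Volumes/<volume name>"  # substitute the name printed by the commands above

    If mount prints nothing for disk4, the image attached as a device but no filesystem actually mounted, which would match the symptoms described.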

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image):

        Turn off Windows write-cache buffer flushing on the device
        To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure.

    Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to safely switch this setting on for an internal disk?

    The setting has different descriptions in different versions of Windows:

    - Windows XP: "Enable write caching on the disk. This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption."
    - Windows Server 2003: "Enable write caching on the disk. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    - Windows Vista: "Enable advanced performance. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    - Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure."

    This article by Raymond Chen has some more detailed information about what the setting does.

  • Plugging a device into the front USB *sometimes* restarts the computer

    - by Mark A. Nicolosi
    I've got a strange problem: very occasionally (maybe once a month) when I plug something into the front USB on my computer, the computer suddenly restarts. This also happens sometimes when I touch the front USB ports. It has been going on for a few years, and a lot of the components in my PC have changed in that time. I thought it was my home wiring, but I moved last year and it still happened. I thought maybe it was the motherboard, but that was upgraded 9 months ago and it still happens. I thought it was my case, but I changed that recently and it still happens. I thought maybe it was my PSU, but I upgraded that yesterday and it still happens. I'm pretty sure this is an electrostatic thing, but I thought modern computers had protection against this sort of thing. Maybe I should move my case off the floor (carpet) and stop wearing socks all the time.

    Edit: Just to clarify, this is a computer that I built. The components have been upgraded throughout the years and it's not much the same computer anymore. This doesn't happen very often, but it is annoying, because I don't know what the cause is. Anyone have any ideas?

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process". I'm slightly confused, since logs show that at the time the system had lots of free memory (around 26 GB in one case) and was not particularly stressed in any other way. After noting a JVM crash with a similar error and the added query "Out of swap space?", I dug a little deeper.

    It turns out that someone configured our zone with a 2 GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128 GB of RAM as it needs. Our SAs are planning to cap this at 32 GB when they get the chance.

    My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it is reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and a seemingly undersized swap space?
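    Solaris does reserve backing store at allocation time: malloc() and fork() must be able to reserve virtual swap (RAM plus swap device) even if the pages never swap out, so a tiny swap device can make allocations fail long before physical memory runs out. The stock Solaris commands for checking the reservation figures:

        swap -s    # summary: allocated, reserved, used, available virtual swap
        swap -l    # per-device swap, with free blocks

    If "available" in swap -s is near zero at crash time while vmstat still shows plenty of free RAM, the undersized-swap theory fits.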

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So, to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case¹. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turned it on... and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents.

    Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll just be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range?

    ¹ One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

  • Windows 7 not booting after failed SRT (SSD caching) install

    - by david
    This is a fairly new computer, only about a month old: i7 2700K, Z68 motherboard, with a 1.5 TB WD Black HD and a 128 GB Crucial M4 SSD. I followed the instructions for setting up SSD caching: the SATA controller was set to RAID, I installed the Intel software and enabled acceleration, and it said everything went fine. But when I went to reboot, I received the lovely "Reboot and Select proper Boot device" error message. I checked the BIOS, and it was booting from the correct HD (I tried the only other option anyway just in case; it was the ~50 GB of unformatted space left on the SSD).

    After that I entered the RAID utility (Ctrl-I at boot), removed the acceleration and deleted the RAID array (because it was being used as a cache, this was non-destructive). Still no boot. So I reinstalled Win7 directly on the SSD, booted, and checked the HDD to make sure it hadn't been wiped. It hadn't: all the files were still there, including all the Windows stuff. I backed up my data to an external drive just in case, but I'd really like to get this install booting again. I trawled the web a bit and have tried entering recovery mode and using bootrec.exe and bootsect.exe to fix it, but to be honest I'm not sure what I'm doing with those.

    My question is basically: how do I make my hard drive bootable again?

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary: VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration complicates the task for the following reasons:

    - It leaves state information around that can cause unpredicted edge cases, causing scripts to fail.
    - It creates potential namespace collisions for multiple processes creating VMs with the same name.
    - Moving/copying resources on the same machine is more complicated, because references in the registry need to be updated.
    - Copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration.
    - If something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs.

    Use case: my specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and would require more logic to deal with the registry's statefulness.

    Imaginary example: it seems that I should just be able to, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TLDR: Is there a way to use VirtualBox without "registering" hard disks and VMs?
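    For scripts that have to live with the registry, one workable pattern is to treat registration as a scoped resource: register under a unique generated name and unregister with cleanup on every exit path. A hedged sketch using real VBoxManage verbs (flag spellings should be verified against the installed VirtualBox version):

        NAME="ci-vm-$$-$(date +%s)"     # unique name avoids collisions between CI jobs
        trap 'VBoxManage unregistervm "$NAME" --delete 2>/dev/null' EXIT
        VBoxManage createvm --name "$NAME" --ostype Linux --basefolder "$PWD/vms" --register
        VBoxManage storagectl "$NAME" --name ide0 --add ide
        VBoxManage storageattach "$NAME" --storagectl ide0 --port 0 --device 0 \
            --type hdd --medium "$PWD/foo.vdi"
        VBoxManage startvm "$NAME" --type headless

    The trap ensures the unregister happens even when a step fails, which addresses the leftover-state concern in the list above.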

  • How do I setup routing for two companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in; company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3).

    With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign each company the default gateway for its own Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetric routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.)

    Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? The routing logic I want goes like this:

    - If the destination address is known (traffic destined for a different VLAN), forward the traffic.
    - If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN 1, or to ISP B's gateway if the source address is on VLAN 2.

    Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
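    That source-selected default route is exactly what policy-based routing does. On a Linux router the same logic looks like the following sketch with iproute2 (addresses and table names are illustrative, and the tables must first be declared in /etc/iproute2/rt_tables):

        # internal destinations bypass the per-ISP tables
        ip rule add to 10.0.0.0/16 lookup main priority 100
        # per-company source rules pick the ISP-specific table
        ip rule add from 10.0.1.0/24 lookup ispa priority 200   # company A (VLAN 1)
        ip rule add from 10.0.2.0/24 lookup ispb priority 210   # company B (VLAN 2)
        # each ISP table carries only its own default route
        ip route add default via 203.0.113.1  table ispa        # ISP A gateway
        ip route add default via 198.51.100.1 table ispb        # ISP B gateway

    The priority-100 rule keeps inter-VLAN and VoIP traffic symmetric through the main table, so only Internet-bound traffic is split by source.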

  • Strange Behaviour with Unicode Characters in Windows

    - by open_sourse
    Ok, I do not know if this is a programming question, but it certainly is a technical one, so I am asking it here. I was working on some internationalization in my PHP code, and in order to ensure that my generated HTML shows Unicode correctly based on the encoding, I decided to add some Chinese text to my PHP page, which then echoes it into the browser to complete my test case.

    So I went to Google, typed "Chinese", and copied the first Chinese text that the search returned (which was ??/??). I then pasted it into Notepad++, which is my editor, and to my surprise it showed up as boxes, similar to [][]/[][]. I thought the encoding in Notepad++ was messed up, so I changed the encoding to UTF-8 and UCS; neither worked. I tried it fresh in a newly encoded file, and still I got the boxes. The same content, when pasted into Google and Stack Overflow (as I did in this posting), shows up as correct Chinese! I even opened the Windows Clipboard Viewer, and the content is represented in the Clipboard as boxes! I tried pasting it into the Windows Explorer address bar and using it to rename a file, but I still get boxes. Yet it shows up correctly when pasted into my Chrome browser address bar!

    Is this a Windows issue? Since I am able to paste it correctly into SO, the data in memory should be encoded correctly, right? But if that is the case, why does it show up as boxes in the Clipboard Viewer? I am confused here. By the way, I am using Windows XP with SP3. (I am asking this question here, even if it is not programmatic, because it is preventing me from running my programming test cases.)

  • Windows 7: How to force a "do you really want to shut down?" dialog

    - by Vokuhila-Oliba
    Sometimes I want to choose "log off current user", but then I hit "Shut down" by accident. Nearly everywhere else, Windows 7 asks "do you really want to do this? Yes/No". But that's not the case when I hit the "Shut down" button: Windows 7 shuts down immediately, without giving me the chance to correct my mistake.

    Q1: Why does Windows 7 shut down immediately without asking "really do that?" in this case?

    Q2: Is there a way to change this behavior? E.g., could I force Windows 7 to display a dialog asking "Do you really want to shut down?"

    Added: I am running Windows 7 Professional 64-bit.

    Added: I tried to change this behavior with the policy editor. It seems to be very easy to completely remove the Shutdown button from the Start menu, but I couldn't find an entry to turn on such a Yes/No dialog.

  • Laptop CPU fan not working properly

    - by Smith
    My laptop's CPU fan is suddenly acting very bizarrely, and I am at a loss for what to do, so I am asking for some help. Specifically, the fan does not even start to spin until the CPU reaches 60 degrees Celsius (checked through HWMonitor). Once it is on, however, it properly stays on, even when the CPU gets back to idle temperatures around 38 Celsius. Before, the fan simply started with the computer and stayed on like normal.

    I've checked this for consistency, both by allowing cold boots to ramp the temperature up to 60 Celsius (the slow climb also causes the laptop to become unreasonably warm near the heatsink), and by running Prime95 immediately to kickstart the fan (this works every time). The fan seems to be stationary when turning the computer on from either sleep or shutdown; it will start at POST very briefly and then stop completely. I've checked the BIOS for a SmartFan setting but haven't found any. I've opened the case to check for dust or debris and have not found anything (I applied some canned air to the area just in case).

    The laptop is a Lenovo ThinkPad Edge E430 and the CPU is an Intel Core i5 3210M @ 2.5 GHz. Any advice would really be appreciated.

  • securing communication between 2 Linux servers on local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Communication between the two servers can be established in two ways:

    1. Opening the required port(s) on both servers, and authenticating according to the applications' rules.
    2. Disabling iptables for the NICs the cross-connect cable is attached to (on both servers).

    Which method is more secure? In the first case, the needed ports are open to the external world, but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers - so if the server presenting the DMZ is compromised, the attacker on the DMZ server could reach every open port over the cross-connect cable. Is there any conventional wisdom on how to secure the communication between two servers for ports only these servers need access to?
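    There is a middle path between the two options: keep iptables enabled on the cross-connect interfaces, but scope the rules to the one peer and the one port each service needs. A hedged sketch for the database side (interface name, peer address, and port are illustrative):

        # allow only the DB port, only from the peer, only on the cross-connect NIC
        iptables -A INPUT -i eth1 -s 192.168.100.2 -p tcp --dport 5432 -j ACCEPT
        # everything else arriving on that NIC is dropped
        iptables -A INPUT -i eth1 -j DROP

    That keeps the application-level authentication of option (1) while ensuring a compromised DMZ host still sees only the ports deliberately exposed.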

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load.

    His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable condition due to huge lags, and I will probably be unable to log in and kill the web server process or deny all incoming traffic (all but SSH).

    Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
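    A common compromise between "swap off" and default swapping behavior is to keep a small swap area but tell the kernel to use it only under real memory pressure, via the vm.swappiness sysctl (0-100; lower means less eager to swap):

        sysctl -w vm.swappiness=1                    # apply immediately
        echo 'vm.swappiness = 1' >> /etc/sysctl.conf # persist across reboots

    This keeps a safety valve for genuinely cold pages while making it much less likely that sshd gets paged out ahead of the web server's memory.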

  • Windows 7: resizing the 'Save As' window

    - by Mark Miller
    I do not know whether my question is appropriate for this forum; apologies if it is not. I am running Windows 7 Professional with Service Pack 1 on a Dell Vostro 460 PC. I am downloading journal articles from the internet and saving them as *.pdf files. Somehow I unintentionally clicked a button, and as a result the 'Save as' window now fills the entire screen of my computer, except for the toolbar at the very bottom. How can I resize the 'Save as' window so it only fills perhaps around 25% of the computer screen, or whatever the default size for that window is?

    I have searched the internet extensively and found one or two threads about this problem, but no specific solutions were posted there. One suggestion was to grab the bottom of the window with the mouse and drag upward until the window was the desired size; that does not appear to be possible in my case. Another suggestion was to click on the window with the middle mouse button before attempting to resize it, but that does not appear to help in my case either. Thank you for any help. If I should post this question in a different forum, please let me know, or kindly migrate the question to the appropriate forum. If additional information is necessary before a solution can be attempted, please let me know that as well.

  • Sudo won't execute command as another user

    - by TOdorus
    I'm trying to get a unicorn server to start when the server boots. I've created a shell script which works if I log in as the ubuntu user and run /etc/init.d/unicorn start.

    Shell script:

        #!/bin/sh
        case "$1" in
        start)
            cd /home/ubuntu/projects/asbest/current/
            unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
            ;;
        stop)
            if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid` > /dev/null; then
                kill `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`
            fi
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        esac

    When I rebooted the server, I noticed that the unicorn server wasn't listening on a socket. Since I had run the script successfully as the ubuntu user, I modified the script to always run the commands as the ubuntu user via sudo:

        #!/bin/sh
        case "$1" in
        start)
            cd /home/ubuntu/projects/asbest/current/
            sudo -u ubuntu unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
            ;;
        stop)
            if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid` > /dev/null; then
                sudo -u ubuntu kill `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`
            fi
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        esac

    After rebooting, unicorn still wouldn't start, so I tried running the script from the command line. Now I get the following error:

        sudo: unicorn_rails: command not found

    I've searched high and low for what could cause this, but I'm afraid I've tapped my limited understanding of Linux. From what I can understand, although sudo should use the ubuntu user to execute the commands, it still uses the environment of the root user, which isn't configured to run ruby or unicorn. Does anybody have any experience with this?
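    The diagnosis in the last paragraph matches how sudo behaves: it resets PATH (and, with env_reset/secure_path, most of the environment), so anything installed via a per-user ruby setup disappears. A hedged sketch of the usual workarounds (the resolved unicorn_rails path is unknown here, so it is looked up first):

        # 1) give sudo a login shell so ubuntu's own PATH is loaded
        sudo -i -u ubuntu unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production

        # 2) or bypass PATH entirely with the absolute command path
        UNICORN=$(su - ubuntu -c 'which unicorn_rails')   # resolve once, as ubuntu
        sudo -u ubuntu "$UNICORN" -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production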

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi 5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via a USB controller and USB device. When I copy a 4 GB file from the external USB drive to the server disk, the VM takes up to 10 minutes; a native Win2003 box takes approx. 3 minutes. I have no explanation for that difference: in either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID 1). If the USB connection in the VM were USB 1.1 and not USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is correct - see the update.) Could it be that my native box is extremely fast and the VM is the normal case?

    Update: I tried passthrough, and a first run copied the same data in approx. 7 minutes - still two times slower than the native connection. I also did another measurement: the copy between partitions on the same VM takes 3 minutes.

  • Intermittent 403 errors when using allow to limit access to url with both explicit IP and SetEnvIf

    - by rbieber
    We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN, and now have the added complexity that the IPs requests appear to come from are actually the CDN servers', not the ultimate end user's. In case we need to back the CDN out, we want to handle both the case where the CDN forwards the request and the case where the ultimate client sends the request directly. The CDN sends the end user IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration we have put in place:

        SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1
        SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1
        <Location /uri/>
            Order deny,allow
            Allow from 10.1.2.3 192.168.
            Allow from env=access_allowed
            Deny from all
        </Location>

    This seems to work fine for a time; however, at some point the web server starts serving 403 errors to the end user - so for some reason it is restricting access. The odd thing is that a bounce of the web server seems to resolve the issue, but only for a time - then the behavior comes back. It might be worth noting that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.

  • How to ensure the local file is up-to-date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    A lot of tutorials out there state that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file, so that all files in it are securely encrypted. This setup has a known limitation: you can NOT have the TrueCrypt container file mounted in more than one location at the same time. If you make changes to the contents of the container in more than one location at a time, the Dropbox system produces a conflict on the container file, resulting in one container file per location. In my case that issue is not relevant - I do not use my data in more than one location at a time.

    I want to use TrueCrypt's auto-mount feature on startup of Windows 7 to have a zero-configuration environment and start working right away. But I want to ensure that the local TrueCrypt container file is up to date before TrueCrypt mounts it automatically. Imagine you updated the contents of the container at your primary location and your secondary location was off for a long time: in that case it can take "a long time" until the Dropbox sync is complete (depending on your Internet connection and the size of the container file). There is an option in TrueCrypt that stops it updating the timestamp of the container file, which speeds up the sync, because the Dropbox client then does a differential sync instead of a time-consuming full sync. That is an improvement to the setup, but it does not fix my issue.

    The question is: how do I make the auto-mount function wait for the container file to be up to date (updated by Dropbox)? In contrast, if the file was changed locally but the remote file (in the Dropbox cloud) is still old (not yet updated, or the sync is in progress), TrueCrypt should not wait for the sync. Suggestions?

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dod> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generate a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal world search. - Noone likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query and then refine the results protected List<Dod> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ); return dogs; } It is particularly useful when you start displaying all the data then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on the Dog.Owner.Name So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq query when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries are small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data on the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
Keep in mind as well that when you Attach() one object, all of its hierarchy gets attached with it, causing problems everywhere. Be extra careful.

How much faster is it with NoTracking?

It depends on the queries. Some are much more susceptible to tracking overhead than others. I don't have a hard and fast rule for it, but it helps.

So I should use NoTracking everywhere then?

Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.)

What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from d in YourContext.DogSet select d);
oDogQuery.MergeOption = MergeOption.NoTracking;
List<Dog> dogs = oDogQuery.ToList();

ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
oPersonQuery.MergeOption = MergeOption.NoTracking;
List<Person> owners = oPersonQuery.ToList();

In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance.

No lazy loading, what am I to do?

This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF.

Of course, you can call

if( !ObjectReference.IsLoaded )
    ObjectReference.Load();

if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense.

Let's say you have your Dog object

public class Dog
{
    public static Dog Get( int id )
    {
        return YourContext.DogSet.FirstOrDefault( it => it.ID == id );
    }
}

This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

Of course, you could call Load() for each reference you need any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

static public Dog Get( int id ) { return Get( id, "" ); }

static public Dog Get( int id, string includePath )
{
    string query = "select value o "
                 + " from YourEntities.DogSet as o "
                 // the source snippet is truncated at this point; the filter and
                 // the parametrized Include below are a sketch of the obvious intent
                 + " where o.ID == @ID ";

    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
    oQuery.Parameters.Add( new ObjectParameter( "ID", id ) );
    if( !String.IsNullOrEmpty( includePath ) )
        oQuery = oQuery.Include( includePath );
    return oQuery.FirstOrDefault();
}
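Each page then states up front which associations it needs. The include paths below are only illustrative; use whatever navigation properties your model actually defines:

// the profile page needs the owner loaded
Dog dogForProfilePage = Dog.Get( 5, "Owner" );

// the toy page needs the favorite toy
Dog dogForToyPage = Dog.Get( 5, "FavoriteToy" );

// somewhere that only needs the dog itself
Dog plainDog = Dog.Get( 5 );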

    Read the article

  • jQuery - Why does the editable-select list plugin not work with the latest jQuery?

    - by Binyamin
    Why does the editable-select list plugin (which replaces a plain <select><option>value</option></select> with an editable text input plus drop-down list) not work with the latest jQuery? editable-select code:

/**
 * Copyright (c) 2009 Anders Ekdahl (http://coffeescripter.com/)
 * Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php)
 * and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses.
 *
 * Version: 1.3.1
 *
 * Demo and documentation: http://coffeescripter.com/code/editable-select/
 */
(function($) {
  var instances = [];

  $.fn.editableSelect = function(options) {
    var defaults = {
      bg_iframe: false,
      onSelect: false,
      items_then_scroll: 10,
      case_sensitive: false
    };
    var settings = $.extend(defaults, options);

    // Only do bg_iframe for browsers that need it.
    // $.browser was removed in jQuery 1.9, which is one reason this plugin
    // breaks on recent versions; sniff the user agent directly instead.
    var isMSIE = /msie/i.test(navigator.userAgent);
    if(settings.bg_iframe && !isMSIE) {
      settings.bg_iframe = false;
    }

    $(this).each(function() {
      var i = instances.length;
      if(typeof $(this).data('editable-selecter') == 'undefined') {
        instances[i] = new EditableSelect(this, settings);
        $(this).data('editable-selecter', i);
      }
    });
    return $(this);
  };

  $.fn.editableSelectInstances = function() {
    var ret = [];
    $(this).each(function() {
      if(typeof $(this).data('editable-selecter') != 'undefined') {
        ret[ret.length] = instances[$(this).data('editable-selecter')];
      }
    });
    return ret;
  };

  var EditableSelect = function(select, settings) {
    this.init(select, settings);
  };

  EditableSelect.prototype = {
    settings: false,
    text: false,
    select: false,
    wrapper: false,
    list_item_height: 20,
    list_height: 0,
    list_is_visible: false,
    hide_on_blur_timeout: false,
    bg_iframe: false,
    current_value: '',

    init: function(select, settings) {
      this.settings = settings;
      this.select = $(select);
      this.text = $('<input type="text">');
      this.text.attr('name', this.select.attr('name'));
      this.text.data('editable-selecter', this.select.data('editable-selecter'));
      // Because we don't want the value of the select when the form
      // is submitted
      this.select.attr('disabled', 'disabled');

      var id = this.select.attr('id');
      if(!id) {
        id = 'editable-select' + instances.length;
      }
      this.text.attr('id', id);
      this.text.attr('autocomplete', 'off');
      this.text.addClass('editable-select');
      this.select.attr('id', id + '_hidden_select');
      this.initInputEvents(this.text);
      this.duplicateOptions();
      this.positionElements();
      this.setWidths();
      if(this.settings.bg_iframe) {
        this.createBackgroundIframe();
      }
    },

    duplicateOptions: function() {
      var context = this;
      var wrapper = $(document.createElement('div'));
      wrapper.addClass('editable-select-options');
      var option_list = $(document.createElement('ul'));
      wrapper.append(option_list);
      var options = this.select.find('option');
      options.each(function() {
        // Use the DOM property instead of attr('selected'): since jQuery 1.6,
        // attr() no longer reflects the current selection state
        if(this.selected) {
          context.text.val($(this).val());
          context.current_value = $(this).val();
        }
        var li = $('<li>' + $(this).val() + '</li>');
        context.initListItemEvents(li);
        option_list.append(li);
      });
      this.wrapper = wrapper;
      this.checkScroll();
    },

    checkScroll: function() {
      var options = this.wrapper.find('li');
      if(options.length > this.settings.items_then_scroll) {
        this.list_height = this.list_item_height * this.settings.items_then_scroll;
        this.wrapper.css('height', this.list_height + 'px');
        this.wrapper.css('overflow', 'auto');
      } else {
        this.wrapper.css('height', 'auto');
        this.wrapper.css('overflow', 'visible');
      }
    },

    addOption: function(value) {
      var li = $('<li>' + value + '</li>');
      var option = $('<option>' + value + '</option>');
      this.select.append(option);
      this.initListItemEvents(li);
      this.wrapper.find('ul').append(li);
      this.setWidths();
      this.checkScroll();
    },

    initInputEvents: function(text) {
      var context = this;
      var timer = false;
      $(document.body).click(function() {
        context.clearSelectedListItem();
        context.hideList();
      });
      text.focus(function() {
        // Can't use the blur event to hide the list, because the blur event
        // is fired in some browsers when you scroll the list
        context.showList();
        context.highlightSelected();
      }).click(function(e) {
        e.stopPropagation();
        context.showList();
        context.highlightSelected();
      }).keydown(function(e) {
        // Capture key events so the user can navigate through the list
        switch(e.keyCode) {
          // Down
          case 40:
            if(!context.listIsVisible()) {
              context.showList();
              context.highlightSelected();
            } else {
              e.preventDefault();
              context.selectNewListItem('down');
            }
            break;
          // Up
          case 38:
            e.preventDefault();
            context.selectNewListItem('up');
            break;
          // Tab
          case 9:
            context.pickListItem(context.selectedListItem());
            break;
          // Esc
          case 27:
            e.preventDefault();
            context.hideList();
            return false;
          // Enter, prevent form submission
          case 13:
            e.preventDefault();
            context.pickListItem(context.selectedListItem());
            return false;
        }
      }).keyup(function(e) {
        // Prevent lots of calls if it's a fast typer
        if(timer !== false) {
          clearTimeout(timer);
          timer = false;
        }
        timer = setTimeout(function() {
          // If the user types in a value, select it if it's in the list
          if(context.text.val() != context.current_value) {
            context.current_value = context.text.val();
            context.highlightSelected();
          }
        }, 200);
      }).keypress(function(e) {
        if(e.keyCode == 13) {
          // Enter, prevent form submission
          e.preventDefault();
          return false;
        }
      });
    },

    initListItemEvents: function(list_item) {
      var context = this;
      list_item.mouseover(function() {
        context.clearSelectedListItem();
        context.selectListItem(list_item);
      }).mousedown(function(e) {
        // Needs to be mousedown and not click, since the input's blur event
        // fires before the list item's click event
        e.stopPropagation();
        context.pickListItem(context.selectedListItem());
      });
    },

    selectNewListItem: function(direction) {
      var li = this.selectedListItem();
      if(!li.length) {
        li = this.selectFirstListItem();
      }
      var sib;
      if(direction == 'down') {
        sib = li.next();
      } else {
        sib = li.prev();
      }
      if(sib.length) {
        this.selectListItem(sib);
        this.scrollToListItem(sib);
        this.unselectListItem(li);
      }
    },

    selectListItem: function(list_item) {
      this.clearSelectedListItem();
      list_item.addClass('selected');
    },

    selectFirstListItem: function() {
      this.clearSelectedListItem();
      var first = this.wrapper.find('li:first');
      first.addClass('selected');
      return first;
    },

    unselectListItem: function(list_item) {
      list_item.removeClass('selected');
    },

    selectedListItem: function() {
      return this.wrapper.find('li.selected');
    },

    clearSelectedListItem: function() {
      this.wrapper.find('li.selected').removeClass('selected');
    },

    pickListItem: function(list_item) {
      if(list_item.length) {
        this.text.val(list_item.text());
        this.current_value = this.text.val();
      }
      if(typeof this.settings.onSelect == 'function') {
        this.settings.onSelect.call(this, list_item);
      }
      this.hideList();
    },

    listIsVisible: function() {
      return this.list_is_visible;
    },

    showList: function() {
      this.wrapper.show();
      this.hideOtherLists();
      this.list_is_visible = true;
      if(this.settings.bg_iframe) {
        this.bg_iframe.show();
      }
    },

    highlightSelected: function() {
      var context = this;
      var current_value = this.text.val();
      if(current_value.length == 0) {
        // Nothing typed yet, fall back to the first item.
        // (The original guard tested current_value.length < 0, which can never
        // be true, and referenced an undefined highlight_first variable.)
        this.selectFirstListItem();
        return;
      }
      if(!context.settings.case_sensitive) {
        current_value = current_value.toLowerCase();
      }
      var best_candidate = false;
      var value_found = false;
      var list_items = this.wrapper.find('li');
      list_items.each(function() {
        if(!value_found) {
          var text = $(this).text();
          if(!context.settings.case_sensitive) {
            text = text.toLowerCase();
          }
          if(text == current_value) {
            value_found = true;
            context.clearSelectedListItem();
            context.selectListItem($(this));
            context.scrollToListItem($(this));
            return false;
          } else if(text.indexOf(current_value) === 0 && !best_candidate) {
            // Can't do return false here, since we still need to iterate over
            // all list items to see if there is an exact match
            best_candidate = $(this);
          }
        }
      });
      if(best_candidate && !value_found) {
        context.clearSelectedListItem();
        context.selectListItem(best_candidate);
        context.scrollToListItem(best_candidate);
      } else if(!best_candidate && !value_found) {
        this.selectFirstListItem();
      }
    },

    scrollToListItem: function(list_item) {
      if(this.list_height) {
        this.wrapper.scrollTop(list_item[0].offsetTop - (this.list_height / 2));
      }
    },

    hideList: function() {
      this.wrapper.hide();
      this.list_is_visible = false;
      if(this.settings.bg_iframe) {
        this.bg_iframe.hide();
      }
    },

    hideOtherLists: function() {
      for(var i = 0; i < instances.length; i++) {
        if(i != this.select.data('editable-selecter')) {
          instances[i].hideList();
        }
      }
    },

    positionElements: function() {
      var offset = this.select.offset();
      offset.top += this.select[0].offsetHeight;
      this.select.after(this.text);
      this.select.hide();
      this.wrapper.css({top: offset.top + 'px', left: offset.left + 'px'});
      $(document.body).append(this.wrapper);
      // Need to do this in order to get the list item height
      this.wrapper.css('visibility', 'hidden');
      this.wrapper.show();
      this.list_item_height = this.wrapper.find('li')[0].offsetHeight;
      this.wrapper.css('visibility', 'visible');
      this.wrapper.hide();
    },

    setWidths: function() {
      // The text input has a right margin because of the background arrow image
      // so we need to remove that from the width
      var width = this.select.width() + 2;
      var padding_right = parseInt(this.text.css('padding-right').replace(/px/, ''), 10);
      this.text.width(width - padding_right);
      this.wrapper.width(width + 2);
      if(this.bg_iframe) {
        this.bg_iframe.width(width + 4);
      }
    },

    createBackgroundIframe: function() {
      var bg_iframe = $('<iframe frameborder="0" class="editable-select-iframe" src="about:blank;"></iframe>');
      $(document.body).append(bg_iframe);
      bg_iframe.width(this.select.width() + 2);
      bg_iframe.height(this.wrapper.height());
      bg_iframe.css({top: this.wrapper.css('top'), left: this.wrapper.css('left')});
      this.bg_iframe = bg_iframe;
    }
  };
})(jQuery);

$(function() {
  $('.editable-select').editableSelect({
    bg_iframe: true,
    onSelect: function(list_item) {
      alert('List item text: ' + list_item.text());
      // 'this' is a reference to the instance of EditableSelect
      // object, so you have full access to everything there
      // alert('Input value: ' + this.text.val());
    },
    case_sensitive: false,   // If set to true, the user has to type in an exact
                             // match for the item to get highlighted
    items_then_scroll: 10    // If there are more than 10 items, display a scrollbar
  });

  var select = $('.editable-select:first');
  var instances = select.editableSelectInstances();
  // instances[0].addOption('Germany, value added programmatically');
});

    Read the article

< Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >