Search Results

Search found 23968 results on 959 pages for 'tail call'.

Page 130/959 | < Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >

  • curl to itself behind firewall

    - by xtreaming
    I have a server A configured behind a firewall, with the public address 30.x.x.x and the internal address 172.x.x.x. I'm trying to make a PHP cURL call from a script located on that server to the server's own external IP, 30.x.x.x, but the call never completes. It seems that server A does not have a route to that IP. Have you encountered any similar situations? Is there any chance of solving it through static routes?
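
    A hedged workaround sketch, not taken from the question: if the script can address the server by hostname rather than by the bare 30.x.x.x address, pointing that hostname at the internal interface on server A avoids the hairpin through the firewall entirely. The hostname and internal IP below are placeholders.

        # /etc/hosts on server A (placeholder values)
        172.16.0.10   www.example.com

        # the PHP script then calls the hostname instead of the public IP;
        # quick check from the shell:
        curl -v http://www.example.com/health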

    Read the article

  • SEO Hosting & the Importance of C Class IP Blocks

    The era of PageRank is not over, and link popularity still counts towards the overall ranking of a website in any industry vertical. The long tail of search is still driven largely by on-page optimization, but for the main traffic-bearing terms, search engines rely on far more than on-page text alone.

    Read the article

  • SEO Copywriting - Embracing Google's Mayday Update

    SEO copywriting has changed dramatically over the past two or three years. Then, it was all meta tags and keyword density. Now, SEO copywriting is more about quality inbound links and useful content that reads smoothly. Google's 2010 Mayday algorithm update also emphasises quality content at the expense of 'long-tail keywords' whose demise is spelt in a single, simple term: 'irrelevance'.

    Read the article

  • When it Comes to SEO, Length Matters

    Search engine optimizing your website can sometimes seem counter-intuitive. The common misconception is to try to associate every potential keyword with your website. Wrong! It's not about quantity; it's about quality long-tail phrases. Here's what you need to know.

    Read the article

  • Lync 2010, Kamailio, & Trixbox 2.6.23 (Asterisk 1.4)

    - by slashp
    I'm having an issue trying to connect Lync 2010 phone calls with our trixbox PBX. I've gotten to the point where Kamailio seems to be functioning properly and acting as a bridge between TCP traffic (from Lync) & UDP traffic (to the trixbox, as Asterisk 1.4 does not support SIP over TCP).

    Our Lync box IP: 10.100.10.41
    Our Kamailio box IP: 10.100.10.44
    Our trixbox IP: 10.100.10.2

    The issue I'm running into is as follows when enabling SIP debugging for the Kamailio box:

        <--- SIP read from 10.100.10.44:5060 --->
        PRACK sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41 SIP/2.0
        FROM: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        TO: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        CSEQ: 24 PRACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        CONTACT: <sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41>
        CONTENT-LENGTH: 0
        USER-AGENT: RTCC/4.0.0.0 MediationServer
        RAck: 1 23 INVITE
        <------------->
        --- (12 headers 0 lines) ---
        Sending to 10.100.10.44 : 5060 (NAT)

        <--- Transmitting (NAT) to 10.100.10.44:5060 --->
        SIP/2.0 481 Call leg/transaction does not exist
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d;received=10.100.10.44
        Via: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        From: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        To: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        Call-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        CSeq: 24 PRACK
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Content-Length: 0
        <------------>

        trixbox1*CLI>
        <--- SIP read from 10.100.10.44:5060 --->
        ACK sip:[email protected];user=phone SIP/2.0
        FROM: "John Jones"<sip:9121;[email protected];user=phone>;tag=4852bab430;epid=CF2380792B
        TO: <sip:[email protected];user=phone>;tag=3684a6a24e;epid=CF2380792B
        CSEQ: 23 ACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK79a21c
        CONTENT-LENGTH: 0

    My SIP trunk on the trixbox looks like this:

        [from-lync]
        exten => _+4XXX!,1,Noop(Stripping + from start of number)
        exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1})

    Though I am still having no luck getting the + stripped or the call to go through. Any ideas would be greatly appreciated. Thank you! -slashp
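
    A hedged dialplan sketch, not taken from the question: the trace shows 4-digit extensions in the 9XXX range (e.g. 9121), while the pattern above only matches numbers beginning with +4, so something along these lines may be closer to what is intended (the from-internal context name is an assumption):

        [from-lync]
        exten => _+9XXX,1,Noop(Stripping + from ${EXTEN})
        exten => _+9XXX,n,Goto(from-internal,${EXTEN:1},1)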

    Read the article

  • Updating Windows DNS records from a remote windows DNS server

    - by Luckyboy
    Does anyone know if it is possible for a Windows 2003 DNS server to update its records for a domain so that it contains all the records of that domain from a remotely based DNS server? I'm almost certain that doesn't quite explain the problem, so I shall illustrate with an example: We have two offices, based about 100 miles apart. One deals with IT (intranet development etc.) while the other is a call centre that uses the intranet systems. Currently each office has its own DNS server, with both the IT office's and the call centre's DNS servers containing entries for the intranet sites. The difference is that the IT DNS server records point to the various servers that host the intranet sites (e.g. intranetsite1 - 192.168.1.10, intranetsite2 - 192.168.1.11) while all of the entries in the call centre's DNS point to the IT office's DNS server (intranetsite1 - [it office ip address], intranetsite2 - [it office ip address]). Is there any way that the call centre's DNS server could automatically pick up all the DNS records hosted by the IT office's DNS, instead of translating every name to the IT office's IP address?
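
    A hedged sketch, not from the question: a standard way to get this behaviour is to host the zone on the call centre's server as a secondary, pulled from the IT office's server, so the real records (with their real IPs) replicate automatically. The zone name intranet.local and master address 192.168.1.5 below are placeholders, and zone transfers must be allowed on the master.

        :: run on the call centre's Windows 2003 DNS server
        dnscmd /ZoneAdd intranet.local /Secondary 192.168.1.5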

    Read the article

  • Remote Desktop Connection Only Works One Way

    - by advocate
    I can't get my desktop to connect to my laptop through Remote Desktop Connection. Unfortunately I can only get my laptop to connect to my desktop (quite useless).

    Desktop:
    - Windows 7 Ultimate 64 Bit SP1
    - Windows firewall is off for all 3 profiles (domain / private / public)
    - Remote Desktop Connection is installed and set to allow all connections
    - Under running services is:
        Running: Remote Desktop Configuration
        Running: Remote Desktop Services
        Running: Remote Desktop Services UserMode Port Redirector
        Running: Remote Procedure Call (RPC)
        Stopped: Remote Access Auto Connection Manager
        Stopped: Remote Access Connection Manager
        Stopped: Remote Procedure Call (RPC) Locator
        Stopped: Remote Registry
        Stopped: Routing and Remote Access
        Stopped: Windows Remote Management (WS-Management)

    Laptop:
    - Windows 7 Home Premium 64 Bit SP1
    - Windows firewall is off for all 3 profiles (domain / private / public)
    - Remote Desktop Connection is installed and set to 'Allow Remote Assistance connections to this computer'
    - Under running services is:
        Running: Remote Procedure Call (RPC)
        Stopped: Remote Access Auto Connection Manager
        Stopped: Remote Access Connection Manager
        Stopped: Remote Desktop Configuration
        Stopped: Remote Desktop Services
        Stopped: Remote Procedure Call (RPC) Locator
        Stopped: Remote Registry
        Stopped: Routing and Remote Access
        Stopped: Windows Remote Management (WS-Management)

    It should be noted that the laptop I'm trying to connect to is an Alienware and might be running some wonky Dell settings. Also, the settings are slightly different for Remote Desktop Connection as it's a Home edition of Windows and not Ultimate like my desktop. Finally, both computers are in the same Homegroup, so that RDC can be accessed with one click through the network section of Windows. They're also in the same workgroup, MSHOME, just to see if that helps.
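
    A hedged diagnostic, not from the question: check on the laptop whether anything is listening on the RDP port at all. Note that Windows 7 Home Premium only provides Remote Assistance, not the Remote Desktop host, so incoming Remote Desktop connections are not expected to work on that edition.

        :: run in a command prompt on the laptop
        netstat -an | findstr 3389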

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones--but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change and that there must be a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions. We linked the Avaya and Asterisk boxes via a PRI card and they are both communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to Avaya. Here's the trouble: dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, this will use up more and more channels on the PRI card. Is there a way I can ask Asterisk to check its local extensions first, and only then forward off to the Avaya system if the number matches '_9XXX'? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001--but I'd really hate to do that for several hundred phones.
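
    For the raw-config route mentioned above, a hedged sketch only (the from-internal context and DAHDI/g0 trunk names are assumptions, not from the question): try the dialed number locally first and only fall back to the Avaya PRI when it is not defined on the Asterisk side.

        exten => _9XXX,1,GotoIf($[${DIALPLAN_EXISTS(from-internal,${EXTEN},1)}]?from-internal,${EXTEN},1)
        exten => _9XXX,n,Dial(DAHDI/g0/${EXTEN})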

    Read the article

  • How can I automatically restart Apache and Varnish if I can't fetch a file?

    - by Tyler
    I need to restart Apache and Varnish and email some logs when the script can't fetch robots.txt, but I am getting an error:

        ./healthcheck: 43 [[: not found

    My server is Ubuntu 12.04 64-bit.

        #!/bin/sh
        # Check if can fetch robots.txt if not then restart Apache and Varnish
        # Send last few lines of logs with date via email

        PATH=/bin:/usr/bin
        THEDIR=/tmp/web-server-health
        [email protected]

        mkdir -p $THEDIR

        if ( wget --timeout=30 -q -P $THEDIR http://website.com/robots.txt )
        then
            # we are up
            touch ~/.apache-was-up
        else
            # down! but if it was down already, don't keep spamming
            if [[ -f ~/.apache-was-up ]]
            then
                # write a nice e-mail
                echo -n "Web server down at " > $THEDIR/mail
                date >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "Apache Log:" >> $THEDIR/mail
                tail -n 30 /var/log/apache2/error.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "AUTH Log:" >> $THEDIR/mail
                tail -n 30 /var/log/auth.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                # kick apache
                echo "Now kicking apache..." >> $THEDIR/mail
                /etc/init.d/varnish stop >> $THEDIR/mail 2>&1
                killall -9 varnishd >> $THEDIR/mail 2>&1
                /etc/init.d/varnish start >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 stop >> $THEDIR/mail 2>&1
                killall -9 apache2 >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 start >> $THEDIR/mail 2>&1
                # prepare the mail
                echo >> $THEDIR/mail
                echo "Good luck troubleshooting!" >> $THEDIR/mail
                # send the mail
                sendemail -o message-content-type=html -f [email protected] -t $EMAIL -u ALARM -m < $THEDIR/mail
                rm ~/.apache-was-up
            fi
        fi

        rm -rf $THEDIR
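
    A hedged reading of that error, not stated in the question: with a #!/bin/sh shebang on Ubuntu 12.04 the script runs under dash, which has no [[ ]] builtin, so the double-bracket test fails. Either change the shebang to bash or use the POSIX test form:

        #!/bin/bash
        # ...or keep /bin/sh and replace the double-bracket test with:
        if [ -f ~/.apache-was-up ]; then
            echo "apache was up previously"
        fi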

    Read the article

  • Xen domU passwd file overwritten with console log output

    - by malfy
    I was setting up a Debian Xen domU and after booting it fine, I added basic configuration to /etc/network/interfaces and ran /etc/init.d/networking restart. This failed, so I decided to reboot. After the reboot I also ran xm shutdown box. When dropped to a shell prompt it wouldn't let me log in. Upon further inspection, I now have garbage in some critical files in /etc:

        root@box:/# tail +1 mnt/etc/{passwd-,shadow}
        tail: cannot open `+1' for reading: No such file or directory

        ==> mnt/etc/passwd- <==
        0000000000100000 (reserved)
        Nov 23 02:02:39 box kernel: [ 0.000000] Xen: 0000000000100000 - 0000000004000000 (usable)
        Nov 23 02:02:39 box kernel: [ 0.000000] DMI not present or invalid.
        Nov 23 02:02:39 box kernel: [ 0.000000] last_pfn = 0x4000 max_arch_pfn = 0x1000000
        Nov 23 02:02:39 box kernel: [ 0.000000] initial memory mapped : 0 - 033ff000
        Nov 23 02:02:39 box kernel: [ 0.000000] init_memory_mapping: 0000000000000000-0000000004000000
        Nov 23 02:02:39 box kernel: [ 0.000000] NX (Execute Disable) protection: active
        Nov 23 02:02:39 box kernel: [ 0.000000] 0000000000 - 0004000000 page 4k
        Nov 23 02:02:39 box kernel: [ 0.000000] kernel direct mapping tables up to 4000000 @ 7000-2c000
        Nov 23 02:02:3

        ==> mnt/etc/shadow <==
        32 nr_cpumask_bits:32 nr_cpu_ids:1 nr_node_ids:1
        Nov 23 02:02:39 box kernel: [ 0.000000] PERCPU: Embedded 15 pages/cpu @c15b0000 s37688 r0 d23752 u65536
        Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: s37688 r0 d23752 u65536 alloc=16*4096
        Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: [0] 0
        Nov 23 02:02:39 box kernel: [ 0.000000] Xen: using vcpu_info placement
        Nov 23 02:02:39 box kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16160
        Nov 23 02:02:39 box kernel: [ 0.000000] Kernel command line: root=/dev/mapper/xen-guest_root ro quiet root=/dev/xvda1 ro
        Nov 23 02:02:39 box kernel: [ 0.000000] PID hash table entries:

    The garbage is also present in the passwd file and the group file (although I didn't paste that above since I have since run debootstrap on the filesystem again). Does anyone have any insight into what happened and why?

    Read the article

  • OpenVZ Can't initialize containers after install

    - by Tonino Jankov
    I have installed OpenVZ on CentOS 6 on a dedicated server. I followed the quick installation guide on the OpenVZ wiki. After installing through yum, I don't know why, but grub.conf wasn't automatically updated to accommodate the new kernel, so I had to do it manually. I edited grub.conf, added the OpenVZ kernel and rebooted - it went fine. The server came up on the OpenVZ kernel and it worked; it started the openvz service by itself. But after I created a container, added an IP to it and attempted to start it, I couldn't. Here is the output from the shell:

        [root@cloud2 ~]# vzctl start 86
        Starting container ...
        Container is mounted
        Container start failed (try to check kernel messages, e.g. "dmesg | tail")
        Container is unmounted
        [root@cloud2 ~]# dmesg | tail
        [ 1973.401596] CT: 86: failed to start with err=-105
        [ 2107.113850] Failed to initialize the ICMP6 control socket (err -105).
        [ 2107.155523] CT: 86: stopped
        [ 2107.155543] CT: 86: failed to start with err=-105
        [ 6348.282184] Failed to initialize the ICMP6 control socket (err -105).
        [ 6348.330348] CT: 86: stopped
        [ 6348.330361] CT: 86: failed to start with err=-105
        [45184.024002] Failed to initialize the ICMP6 control socket (err -105).
        [45184.072086] CT: 86: stopped
        [45184.072099] CT: 86: failed to start with err=-105
        [root@cloud2 ~]#

    I don't know what is wrong. I tried different templates - Debian 6, CentOS 6, i386, amd64 - but the issue is the same. What is the problem?
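
    A hedged first check, not from the question: the repeated "Failed to initialize the ICMP6 control socket" lines suggest the container cannot get IPv6 support from the host kernel, so it may be worth confirming that the ipv6 module is available and loaded on the host (treat this as an assumption, not a confirmed fix).

        lsmod | grep -w ipv6
        modprobe ipv6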

    Read the article

  • How can I identify which process is making UDP traffic on Linux?

    - by boos
    My machine is continuously making UDP DNS traffic requests. What I need to know is the PID of the process generating this traffic. The normal way for a TCP connection is to use netstat/lsof and get the process associated with the PID. But UDP is stateless, so when I call netstat/lsof I can only see the process if the UDP socket is still open and sending traffic. I have tried lsof -i UDP and netstat -anpue, but I can't find which process is making the requests, because I would need to call lsof/netstat at exactly the moment the UDP datagram is sent; calling them before or after the datagram goes out shows no open UDP socket. Calling netstat/lsof at the exact moment 3 or 4 UDP packets are sent is impossible. How can I identify the offending process? I have already inspected the traffic to try to identify the sending PID from the packet contents, but it cannot be identified that way. Can anyone help me? I'm root on this machine: Fedora 12, Linux noise.company.lan 2.6.32.16-141.fc12.x86_64 #1 SMP Wed Jul 7 04:49:59 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
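
    A hedged approach, not from the question: have the kernel log the owning UID of each outgoing DNS datagram and work back from the UID to the process. --log-uid is a standard option of the iptables LOG target.

        iptables -A OUTPUT -p udp --dport 53 -j LOG --log-prefix "DNS-OUT " --log-uid
        tail -f /var/log/messages | grep DNS-OUT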

    Read the article

  • How to use Timer broadcast on Multi-Processor system with linux 3.10?

    - by kevin.ji
    Hardware: ARM Cortex-A9 * 2
    Software: linux-3.10.0

    The platform has 2 Cortex-A9 cores. The option CONFIG_LOCAL_TIMERS is not set in the Linux menuconfig. I want to use only one hardware timer to supply the tick for all CPUs. Interrupts look like:

                   CPU0       CPU1
          57:      6697          0       GIC  timer
          81:       213          0       GIC  uart-pl011
         103:         0          0       GIC  gmac0
         104:         0          0       GIC  gmac1
        IPI0:         0          1       CPU wakeup interrupts
        IPI1:         0          0       Timer broadcast interrupts
        IPI2:       967        866       Rescheduling interrupts
        IPI3:         0          0       Function call interrupts
        IPI4:         1          2       Single function call interrupts
        IPI5:         0          0       CPU stop interrupts
        IPI6:         0          0       CPU backtrace
        Err:          0

    The Timer broadcast interrupts counter does not increase, and it looks like CPU1 does not work at all. But this method works well with linux-3.4, where the interrupt info looks as below:

        # cat /proc/interrupts
                   CPU0       CPU1
          57:      8596          0       GIC  timer
          81:        91          0       GIC  uart-pl011
         103:         0          0       GIC  gmac0
         104:         0          0       GIC  gmac1
        IPI0:         0       8560       Timer broadcast interrupts
        IPI1:       884       1020       Rescheduling interrupts
        IPI2:         0          0       Function call interrupts
        IPI3:         0          6       Single function call interrupts
        IPI4:         0          0       CPU stop interrupts
        IPI5:         0          0       CPU backtrace
        Err:          0

    Here the count of Timer broadcast interrupts is increasing, and all CPUs work well. I don't know why. Any answer is welcome. :)
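
    A hedged way to inspect this, not from the question: /proc/timer_list shows which clock event device the kernel registered for tick broadcast and which devices are per-CPU, which should reveal whether a broadcast device was set up at all on the 3.10 kernel.

        grep -A 5 "Broadcast device" /proc/timer_list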

    Read the article

  • How can I mount dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ FS) of one of my disks (and not the entire disk) a few days ago, using the following command:

        dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img

    How can I mount it? I tried the following but it doesn't work:

        mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir

    It gives me:

        mount: wrong fs type, bad option, bad superblock on /dev/loop1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    and dmesg | tail gives me:

        [5248455.568479] hfs: invalid secondary volume header
        [5248455.568494] hfs: unable to find HFS+ superblock
        [5248462.674836] hfs: invalid secondary volume header
        [5248462.674843] hfs: unable to find HFS+ superblock
        [5248550.672105] hfs: invalid secondary volume header
        [5248550.672115] hfs: unable to find HFS+ superblock
        [5248993.612026] hfs: unable to find HFS+ superblock
        [5248998.103385] hfs: unable to find HFS+ superblock
        [5249031.441359] hfs: unable to find HFS+ superblock
        [5249036.274864] hfs: unable to find HFS+ superblock

    Is there something wrong that I am doing? I tried searching on how to do this, but all the results I get only talk about mounting a partition from within a full disk image, using the offset option with mount - none talk about the case where the image itself is that of a partition. Thanks.

    PS: I'm running 64-bit Arch Linux, and the partition from the original disk, /dev/sdc2, mounts fine.
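
    A hedged first check, not from the question: HFS+ keeps an alternate volume header in the last 1024 bytes of the volume, and the "invalid secondary volume header" message can simply mean the image is not exactly the same length as the partition (bs=8k with conv=sync pads the final partial block). Comparing the two sizes would confirm or rule that out:

        blockdev --getsize64 /dev/sdc2
        stat -c %s /path/to/img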

    Read the article

  • su not giving proper message for restricted LDAP groups

    - by user1743881
    I have configured PAM authentication on a Linux box to restrict login to a particular group only. I have enabled pam and ldap through authconfig and modified access.conf like below:

        [root@test root]# tail -1 /etc/security/access.conf
        - : ALL EXCEPT root test-auth : ALL

    I also modified the sudoers file, to allow su for this group:

        [root@test ~]# tail -1 /etc/sudoers
        %test-auth ALL=/bin/su

    Now only this LDAP group's members can log in to the system. However, when I try su from any of these authorized users, it asks for a password and then, even though I enter the correct password, it gives a message like "incorrect password" and the login fails. /var/log/secure shows that the user does not have permission to get access, but then it should print a message like "Access denied", the way it does for console login. The functionality is working, but it's not giving proper messages. Could anyone please help with this? My /etc/pam.d/su file:

        [root@test root]# cat /etc/pam.d/su
        #%PAM-1.0
        auth            sufficient      pam_rootok.so
        # Uncomment the following line to implicitly trust users in the "wheel" group.
        #auth           sufficient      pam_wheel.so trust use_uid
        # Uncomment the following line to require a user to be in the "wheel" group.
        #auth           required        pam_wheel.so use_uid
        auth            include         system-auth
        account         sufficient      pam_succeed_if.so uid = 0 use_uid quiet
        account         include         system-auth
        password        include         system-auth
        session         include         system-auth
        session         optional        pam_xauth.so
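
    A hedged sketch only, not a confirmed fix: if the denial is meant to come from access.conf, adding pam_access explicitly to the account stack of /etc/pam.d/su should make the failure surface as an access-control error rather than a generic authentication failure. Line placement below is an assumption.

        account         required        pam_access.so
        account         sufficient      pam_succeed_if.so uid = 0 use_uid quiet
        account         include         system-auth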

    Read the article

  • Why does this loopback device creation malfunction?

    - by user50118
    The stackoverflow people thought this was more appropriate here. I put it there as it is part of a program, but I can see their POV, so here it is:

    At the bottom of the code you can see it failing. In fact, I'll put it here at the start too because it is the problem I need to solve:

        [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)

    I don't understand why the device is supposedly too small. I made this partition two days ago with normal fdisk; it was created and formatted with ext4 supplying no options other than the partition (/dev/sdb2) to format. The only explanation I can think of is that ext4 has the size of the partition wrong somehow, but that seems very unlikely. What is wrong with my math? The offset is correct, you can see that with the file command, and the size should be correct too because End - Start comes to the same number of sectors minus 1, just like it should (a disk starting on sector 1 and ending on sector 2 would be 2 - 1 = 1 and have two sectors).

        # sfdisk -luS /dev/sdb

        Disk /dev/sdb: 9729 cylinders, 255 heads, 63 sectors/track
        Units = sectors of 512 bytes, counting from 0

           Device Boot    Start       End   #sectors  Id  System
        /dev/sdb2      78295040  156296384   78001345  83  Linux

        # losetup -r -f --show -o $((78295040 * 512)) --sizelimit $((78001345 * 512)) /dev/sdb
        /dev/loop0

        # file -s /dev/loop0
        /dev/loop0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files)

        # mount -o ro -t ext4 /dev/loop0 /mnt
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        # dmesg | tail -n 1
        [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)
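
    A quick check of the numbers, using only the values shown above (shell arithmetic, not part of the original question): the loop device really does hold 9750168 4-KiB blocks, so the ext4 superblock is claiming a filesystem about 2.5 MiB larger than the partition, which is exactly what the kernel complains about.

        echo $(( 78001345 * 512 / 4096 ))        # 9750168  blocks available on the loop device
        echo $(( (9750806 - 9750168) * 4096 ))   # 2613248  bytes claimed beyond the device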

    Read the article

  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon. I want to monitor the directories in the Dropbox directory for new files and run a python script when they appear. I have the python script, which I know works:

        $ /home/dropbox/monitor.py
        Trying to get lock
        Got lock, waiting for Dropbox to be idle
        Dropbox idle
        Finding instructions
        Done, releasing lock

    I have an incrontab entry:

        $ incrontab -l
        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
        /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"

    When I add a file to the test directory I see the output in /var/log/syslog:

        $ touch /home/dropbox/test/a
        $ tail /var/log/syslog
        ...
        Nov  9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
        Nov  9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
        ...

    However, when I add a file to the Dropbox directory the command doesn't seem to run:

        $ touch /home/dropbox/Dropbox/a
        $ tail /var/log/syslog
        ...
        Nov  9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
        ...

    So the incron daemon notices the new file and the correct command is found to be executed, but it never actually gets executed. Nor are there any error messages. It kind of seems like incrontab can only be used to run the most simple of commands. This might be a similar question to: Incrond running but not executing commands CentOS 6.4, but I think that I don't have env problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
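
    A hedged reading, not from the question: incrond does not pass the command line through a shell, so the "| logger" is handed to monitor.py as extra arguments rather than acting as a pipe, and anything the script needs a shell for will silently misbehave. Wrapping the command in an explicit shell is one way to test that theory:

        /home/dropbox/Dropbox IN_CREATE /bin/bash -c '/home/dropbox/monitor.py 2>&1 | logger'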

    Read the article

  • Why am I getting this error in the logs?

    - by Matt
    OK, so I just started a new Ubuntu Server 11.10 box and added the vhost, and all seems OK. I also restarted Apache, but when I visit the browser I get a blank page. The server IP is http://23.21.197.126/ but when I tail the log:

        tail -f /var/log/apache2/error.log
        [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs
        [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs

    but my only file in sites-enabled is this:

        <VirtualHost 23.21.197.126:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            # ServerAlias
            DocumentRoot /srv/crm/current/public
            ErrorLog /srv/crm/logs/error.log
            <Directory "/srv/crm/current/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Is there something I am missing? The document root should be /srv/crm/current/public and not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this?

    UPDATE

        sudo apache2ctl -S
        VirtualHost configuration:
        23.21.197.126:80       is a NameVirtualHost
                 default server logicxl.com (/etc/apache2/sites-enabled/crm:1)
                 port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1)
        Syntax OK

    UPDATE

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            DocumentRoot /srv/crm/current/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /srv/crm/current/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
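
    A hedged check, not from the question: the /etc/apache2/htdocs fallback is Apache's compile-time default document root, which is normally served only when no enabled site matches the request. Confirming that the port 80 listener and the name-based setup line up, and that the document root actually exists and is readable by www-data, is a cheap first step:

        grep -n "Listen\|NameVirtualHost" /etc/apache2/ports.conf
        ls -ld /srv/crm/current/public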

    Read the article

  • Error when installing wubi on windows 7

    - by P'sao
    I'm installing Ubuntu on Windows 7 (Wubi 11.10). When it's nearly done it gives me this error in the log file:

        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        Traceback (most recent call last):
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
          File "\lib\wubi\backends\common\utils.py", line 66, in run_command
        Exception: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        10-25 20:31 DEBUG TaskList: # Cancelling tasklist
        10-25 20:31 DEBUG TaskList: # Finished tasklist
        10-25 20:31 ERROR root: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        Traceback (most recent call last):
          File "\lib\wubi\application.py", line 58, in run
          File "\lib\wubi\application.py", line 132, in select_task
          File "\lib\wubi\application.py", line 158, in run_installer
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
          File "\lib\wubi\backends\common\utils.py", line 66, in run_command
        Exception: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]

    Can someone help me?

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!
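
    A hedged alternative to compare against, not taken from the script above: streaming each database through mysqldump in one pass keeps the same "fresh copy" semantics and avoids issuing one INSERT ... SELECT per table through the client; whether it is actually faster for a same-server copy is worth measuring rather than assuming. Shown for the analytics database only, reusing the variables defined above.

        mysqldump --single-transaction --quick -h ${server} -u ${username} --password=${password} analytics \
          | mysql -h ${server} -u ${username} --password=${password} ${prefix}analytics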

    Read the article

  • Cannot mount my USB disk -- neither Ubuntu nor Windows [dmesg included]

    - by EthanZ6174
    First, here is my dmesg | tail result right after I plugged in the disk:

        $ dmesg | tail
        [ 2578.697224] scsi 6:0:0:0: Direct-Access     HP       v100w            PMAP PQ: 0 ANSI: 0 CCS
        [ 2578.698322] sd 6:0:0:0: Attached scsi generic sg2 type 0
        [ 2578.916464] sd 6:0:0:0: [sdb] 3921920 512-byte logical blocks: (2.00 GB/1.87 GiB)
        [ 2578.916950] sd 6:0:0:0: [sdb] Write Protect is off
        [ 2578.916956] sd 6:0:0:0: [sdb] Mode Sense: 23 00 00 00
        [ 2578.916961] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922460] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922470]  sdb:
        [ 2578.969570] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.969578] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    There is nothing after 'sdb:' ... In the meantime, lsusb shows:

        $ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 004: ID 03f0:3207 Hewlett-Packard
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 002: ID 045e:0737 Microsoft Corp.
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    So... can anyone help me? What's wrong with my USB disk? Thanks.
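
    A hedged next step, not from the question: the bare "sdb:" line with nothing after it suggests the kernel found no partition table on the device. Checking what is actually on it, and trying to mount the whole device in case it carries a bare filesystem, would narrow things down:

        sudo fdisk -l /dev/sdb
        sudo file -s /dev/sdb
        sudo mount /dev/sdb /mnt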

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<<  -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    new-Item -ItemType Directory -Path $target | Out-Null
                }
                get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed.
        You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, the Update-FormatData command had already been called when I ran the install script. In the install script, I force-import the module, which fires that Update-FormatData call again.
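
    A hedged workaround sketch, not from the question: treat the "already present" case as non-fatal by trapping the error and falling back to a plain refresh of whatever format data is already loaded.

        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot '*.ps1xml') -ErrorAction Stop
        }
        catch {
            # assumption: the failure is the "file skipped because it was already present"
            # case shown above, so refreshing the already-loaded format data is enough
            Update-FormatData
        }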

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5 GB after only 2 days of work). The database is made of seven tables, each having a bigint primary key (Identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB; each of these does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE... IF @@ROWCOUNT=0 INSERT...). I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call and then closes the connection, for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple.

    Some questions to diagnose the problem:
    - Is that steadily growing memory normal?
    - Did I make any mistake in database design that probably leads to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?
    - Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)?
    - Does the continuous "open connection / do sp call / close connection" pattern disturb SQL Server?
    - Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big number of orphan connections from the external application.)
    - If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
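
    On the last question, a hedged and partial mitigation sketch (not from the question; the 1024 MB figure is a placeholder): capping max server memory makes the growth stop at a known ceiling, after which SQL Server recycles its own caches instead of taking more from the OS.

        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;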

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit.

    Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 SP1.

    Why pick EF in the first place?

    Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad.

    EF and inheritance

    The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or as little as possible.

    Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once.

    Ok, this is bad: the EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this.

    So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance.

    Bypassing inheritance with Interfaces

    The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class as a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity would be.

        public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

        public interface IEntity
        {
            int EntityID { get; }
            string Name { get; }
            EntityTypes EntityType { get; }
        }

        public partial class Dog : IEntity
        {
            // implement EntityID and Name which could actually be fields
            // from your EF model
            public EntityTypes EntityType { get { return EntityTypes.Dog; } }
        }

    Using this IEntity, you can then work with undefined associations in other classes:

        // lets take a class that you defined in your model.
        // that class has a mapping to the columns: PetID, PetType
        public partial class Person
        {
            public IEntity GetPet()
            {
                return IEntityController.Get(PetID, PetType);
            }
        }

    which makes use of some extension functions:

        public class IEntityController
        {
            static public IEntity Get(int id, EntityTypes type)
            {
                switch (type)
                {
                    case EntityTypes.Dog: return Dog.Get(id);
                    case EntityTypes.Cat: return Cat.Get(id);
                    default: throw new Exception("Invalid EntityType");
                }
            }
        }

    Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative use of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

    Compiled Queries

    Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql.

    What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you will save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries.

    There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.)

    An aside here, in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1].

    Ok, so here is how to do it using ObjectQuery<T>:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In this example EnablePlanCaching = true, which is unnecessary since that is the default option.

    The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                ctx.DogSet.FirstOrDefault(it => it.ID == id));

    or using linq

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());

    to call the query:

        query_GetDog.Invoke( YourContext, id );

    The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...
    Includes

    Let's say you want the data for the dog owner to be returned by the query, to avoid making 2 calls to the database. Easy to do, right?

    EntitySQL:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    CompiledQuery:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault());

    Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy and so on. Basically, you want to tell the query which associations to load.

    It is easy to do with EntitySQL:

        public Dog Get(int id, string include)
        {
            string query = "select value dog " +
                           "from Entities.DogSet as dog " +
                           "where dog.ID = @ID";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)
                                          .IncludeMany(include);
            oQuery.Parameters.Add(new ObjectParameter("ID", id));
            oQuery.EnablePlanCaching = true;
            return oQuery.FirstOrDefault();
        }

    The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function.

    If we try to do it with CompiledQuery however, we run into numerous problems. The obvious

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault());

    will choke when called with:

        query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" );

    because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!).

    Then, let's use IncludeMany(), which is an extension function:

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault());

    Wrong again, this time because the EF cannot parse IncludeMany, since it is not part of the functions that it recognizes: it is an extension.

    Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each separated string in a struct to CompiledQuery. But now the query looks like this:

        from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
                              .Include(include4).Include(include5).Include(include6)
                              .[...].Include(include19).Include(include20)
        where dog.ID == id
        select dog

    which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery<T> with CompiledQuery? Then set the includes on that?
    Well, that's what I would have thought as well:

        static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
                (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog));

        public Dog GetDog( int id, string include )
        {
            ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
            oQuery = oQuery.IncludeMany(include);
            return oQuery.FirstOrDefault();
        }

    That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! So the expression tree needs to be reparsed and you get that performance hit again.

    So what is the solution? You simply cannot use CompiledQuery with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQuery. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required.

    Passing more than 3 parameters to a CompiledQuery

    Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily:

        public struct MyParams
        {
            public string param1;
            public int param2;
            public DateTime param3;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where dog.Age == myParams.param2 && dog.Name == myParams.param1
                      && dog.BirthDate > myParams.param3
                select dog);

        public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate )
        {
            MyParams myParams = new MyParams();
            myParams.param1 = name;
            myParams.param2 = age;
            myParams.param3 = birthDate;
            return query_GetDog(YourContext, myParams).ToList();
        }

    Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method)

    Working with Linq, you usually don't force the execution of the query until the very last moment, in case some function downstream wants to change the query in some way:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog);

        public IEnumerable<Dog> GetSomeDogs( int age, string name )
        {
            return query_GetDog(YourContext, age, name);
        }

        public void DataBindStuff()
        {
            IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
        }

    What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be forced to re-parse. So, the rule of thumb is to return a List<T> of objects instead.
        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog);

        public List<Dog> GetSomeDogs( int age, string name )
        {
            return query_GetDog(YourContext, age, name).ToList(); //<== change here
        }

        public void DataBindStuff()
        {
            List<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
        }

    When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan.

    Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

    Performance

    What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql.

    When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300 ms. A dramatic difference, and it is up to you to decide if it is OK for your first user to take a hit, or if you want a script to call your pages to force a compilation of the queries.

    Can this query be cached?

        { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; }

    No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it.

    Parametrized Queries

    Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

        public struct MyParams
        {
            public string name;
            public bool checkName;
            public int age;
            public bool checkAge;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where (myParams.checkAge == false || dog.Age == myParams.age)
                      && (myParams.checkName == false || dog.Name == myParams.name)
                select dog);

        protected List<Dog> GetSomeDogs()
        {
            MyParams myParams = new MyParams();
            myParams.name = "Bud";
            myParams.checkName = true;
            myParams.age = 0;
            myParams.checkAge = false;
            return query_GetDog(YourContext, myParams).ToList();
        }

    The advantage here is that you get all the benefits of a pre-compiled query.
    The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

    Another way is to build an EntitySQL query piece by piece, like we all did with SQL:

        protected List<Dog> GetSomeDogs( string name, int age)
        {
            string query = "select value dog from Entities.DogSet where 1 = 1 ";
            if( !String.IsNullOrEmpty(name) )
                query = query + " and dog.Name == @Name ";
            if( age > 0 )
                query = query + " and dog.Age == @Age ";

            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
            if( !String.IsNullOrEmpty(name) )
                oQuery.Parameters.Add( new ObjectParameter( "Name", name ) );
            if( age > 0 )
                oQuery.Parameters.Add( new ObjectParameter( "Age", age ) );

            return oQuery.ToList();
        }

    Here the problems are:
    - there is no syntax checking during compilation
    - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search.
    - no one likes to concatenate strings!

    Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results:

        protected List<Dog> GetSomeDogs( string name, int age, string city)
        {
            string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City ";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
            oQuery.Parameters.Add( new ObjectParameter( "City", city ) );

            List<Dog> dogs = oQuery.ToList();
            if( !String.IsNullOrEmpty(name) )
                dogs = dogs.Where( it => it.Name == name ).ToList();
            if( age > 0 )
                dogs = dogs.Where( it => it.Age == age ).ToList();
            return dogs;
        }

    It is particularly useful when you start displaying all the data and then allow filtering.

    Problems:
    - Could lead to serious data transfer if you are not careful about your subset.
    - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

    So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
    - Use lambda-based query building when you don't care about pre-compiling your queries.
    - Use a fully-defined pre-compiled Linq query when your object structure is not too complex.
    - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits).
    - Use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any extra time to be spent in the db).
    Singleton access

    The best way to deal with your context and entities across all your pages is to use the singleton pattern:

        public sealed class YourContext
        {
            private const string instanceKey = "On3GoModelKey";

            YourContext() {}

            public static YourEntities Instance
            {
                get
                {
                    HttpContext context = HttpContext.Current;
                    if( context == null )
                        return Nested.instance;

                    if (context.Items[instanceKey] == null)
                    {
                        YourEntities entity = new YourEntities();
                        context.Items[instanceKey] = entity;
                    }
                    return (YourEntities)context.Items[instanceKey];
                }
            }

            class Nested
            {
                // Explicit static constructor to tell C# compiler
                // not to mark type as beforefieldinit
                static Nested() { }
                internal static readonly YourEntities instance = new YourEntities();
            }
        }

    NoTracking, is it worth it?

    When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here.

    For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10.

        Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault();
        //dog.OwnerReference.IsLoaded == false;

        Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault();
        //dog.OwnerReference.IsLoaded == true;

    If we were to do the same with no tracking, the result would be different:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from dog in YourContext.DogSet where dog.ID == 2 select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog = oDogQuery.FirstOrDefault();
        //dog.OwnerReference.IsLoaded == false;

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
            (from o in YourContext.PersonSet where o.ID == 10 select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        Person owner = oPersonQuery.FirstOrDefault();
        //dog.OwnerReference.IsLoaded == false;

    Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking.

    Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix, you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that

        Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault();

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from dog in YourContext.DogSet where dog.ID == 2 select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog2 = oDogQuery.FirstOrDefault();

    dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail".
    And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

    How much faster is it with NoTracking?

    It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

    So I should use NoTracking everywhere then?

    Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.)

    What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen.

    Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        List<Dog> dogs = oDogQuery.ToList();

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        List<Person> owners = oPersonQuery.ToList();

    In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance.

    No lazy loading, what am I to do?

    This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF.

    Of course, you can call

        if( !ObjectReference.IsLoaded ) ObjectReference.Load();

    if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense.

    Let's say you have your Dog object:

        public class Dog
        {
            public Dog Get(int id)
            {
                return YourContext.DogSet.FirstOrDefault(it => it.ID == id );
            }
        }

    This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

    Of course, you could call Load() for each reference you need any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

        static public Dog Get(int id) { return GetDog(entity,"");}

        static public Dog Get(int id, string includePath)
        {
            string query = "select value o " +
                           " from YourEntities.DogSet as o " +

    Read the article
