Search Results

Search found 20313 results on 813 pages for 'batch size'.

  • Advanced TSQL training

    - by Dave Ballantyne
    Over the past few years, I've had it on my to-do list to write and deliver a full-scale SQL Server training course, not just an hour-long, bite-size session at user groups and conferences. To me, SQL Server development is not just knowing and remembering the syntax of commands. Sometimes I semi-jest that I have "written a merge statement without looking up the syntax", but I know from my interactions on and off line that I am far from alone in this. In any case, we have an awesome tool in the internet, which is great for looking things up. When developing SQL Server based solutions, what matters more is knowing the internals of the engine. SQL Server is a complex piece of software, and we need to be able to understand, to a fairly low level (you can always dive deeper), the choices that it makes and why it makes them, in order to deliver performant, reliable, predictable and scalable systems to our customers and end-users. This is the view I shall be taking over two days in March (19th and 20th) in London and, TBH, one I don't see taken often enough. Early bird discounts are available until 31 Dec. Full details of the course and a high-level view of the bullet points we shall be covering are available at the Technitrain site ( http://tinyurl.com/TSQLTraining )

  • using "touch" to create directories?

    - by user66732
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

    - Very large installation size (~4 GB)
    - Composed of a core, and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The debian/rules file is just the standard catch-all target that runs dh $@. There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are:

    - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile; see the sketch below.)
    - Can you make debuild only rebuild the .debs that need rebuilding? Or specify which .debs to rebuild?
    - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)
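
    One possibility, sketched as an untested assumption rather than a recipe: override dh_install in debian/rules and hardlink-copy the payload with GNU cp -al, so a rebuild relinks instead of recopying gigabytes. The staging path and package name below are hypothetical, and both paths must live on the same filesystem for hardlinks to work:

        override_dh_install:
            mkdir -p debian/matlab2011b-core/usr/lib/matlab2011b
            # cp -a preserves attributes; -l creates hardlinks instead of copying data
            cp -al /srv/staging/matlab2011b/. debian/matlab2011b-core/usr/lib/matlab2011b/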

  • Can Sphider be a search engine for an intranet?

    - by garcon1986
    Hello, Sphinx is a kind of search engine, but it has to be installed on the server. I can't install anything on the server, so I had to find another solution. I have tested Sphider on a small site, but when I want to integrate it into my intranet, it doesn't work. The error output:

        1. Retrieving: http://localhost/XXX at 16:44:20.
           Updated Link To http://localhost/XXX
           Size of page: 1.54kb. Starting indexing at 16:44:20.
           Page contains less than 10 words
           Links found: 1. New links: 1
        2. Retrieving: http://localhost/XXX.php at 16:44:20.
           Unreachable: http 404
           Links found: 0. New links: 0

    Anyone has ideas? Thanks

  • Why is my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After having run apt-get purge and apt-get autoremove and emptied the Trash can, I still have this problem, as shown by this screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One hypothesis is that when installing Ubuntu 12.04 I didn't configure my disks well, and the disk /dev/sda6 is not properly mounted as /home. Is this indeed the reason? What should I do to verify this and then to get things fixed? Here are a few additional details to answer the questions I received (thank you everybody): My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself, manually.) After I mount /dev/sda6, the command df -h gives:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda7       244G  221G   12G  96% /
        udev            3,9G  4,0K  3,9G   1% /dev
        tmpfs           1,6G  904K  1,6G   1% /run
        none            5,0M     0  5,0M   0% /run/lock
        none            3,9G  164K  3,9G   1% /run/shm
        /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.
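
    To verify the hypothesis and make the mount permanent, the usual route is an /etc/fstab entry keyed on the partition's UUID. A hedged sketch, reusing the UUID that appears in the /media mount point above (confirm it, and the filesystem type, with blkid first):

        sudo blkid /dev/sda6    # confirm the UUID and filesystem type
        # then add a line like this to /etc/fstab (ext4 is an assumption here):
        # UUID=8ec2fa69-039b-4c52-ab1b-034d785132a1  /home  ext4  defaults  0  2
        sudo mount -a           # mounts everything in fstab, as a test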

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:

    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)

    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

  • Keyboard shortcut / navigation references

    - by jerryjvl
    I use the mouse much too freely, and my wrists are not thanking me for it. I have been meaning to try to use the keyboard more as my sole means of navigating Windows, but I am having trouble sticking with it, because when I need to do something and I cannot find the right shortcut, I grab the mouse and forget to let go of it again. Personally, the main software that I need keyboard reference sheets for would be:

    - Firefox
    - Thunderbird
    - Visual Studio
    - Windows itself

    But I would encourage more general inclusion of shortcut references in the answers, in case anyone else tries to make the same transition I am attempting ;) What I am looking for is reference material that is as comprehensive as possible, so that over time I can hopefully learn to do everything with the keyboard and spare my wrists. Bonus points for references that can be printed at a reasonable size, so I can keep them next to my machine in hardcopy. I know there is an answer for Windows already (Is there a definitive reference for Windows shortcut keys?), but I am leaving it in my question in case anyone has a better printable alternative.

  • 3 Headed Display Ubuntu Wallpaper

    - by Kyle Brandt
    I got 3 displays working in Linux. I have set it up so I have Xinerama and 3 different X sessions. My 3 displays are not all the same size: two are 1680x1050, and the other, which sits between the two, is 2560x1600. My issue is that when I choose span, even if the image is larger than all 3 resolutions combined, the center monitor has bars at the top and the bottom, because span seems to scale the image. Is there a way, if I bypass the GUI, to have an image span all three monitors with no scaling? Urgent! Please help! ;-)
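
    One hedged possibility for bypassing the GUI: set the wallpaper with feh, whose --no-xinerama flag treats the combined X screen as a single surface, and --bg-center places the image 1:1 with no scaling. The file name is hypothetical; the image should be pre-sized to the full virtual screen (here 1680 + 2560 + 1680 = 5920 px wide by 1600 px high):

        # place one 5920x1600 image across all three monitors, unscaled
        feh --no-xinerama --bg-center ~/Pictures/panorama-5920x1600.jpg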

  • Error while mounting home directory on different logical volume

    - by RCola
    I created a RAID 5 array from 3 hard drives and formatted the array as ext4. I then created the VG0 volume group and the lv_home logical volume in LVM. When trying to mount the logical volume lv_home on /home, the folder containing the user profiles, I get this error:

        mount: wrong fs type, bad option, bad superblock on /dev/mapper/VG0-lv_home

    The next one seems to be a symbolic link:

        # file -s /dev/VG0/lv_home
        /dev/VG0/lv_home: symbolic link to `../mapper/VG0-lv_home'

    then

        # file -s /dev/mapper/VG0-lv_home
        /dev/mapper/VG0-lv_home: data

    and

        lvm> pvs
          PV         VG   Fmt  Attr PSize PFree
          /dev/md0   VG0  lvm2 a-   2.02g 68.00m

        lvm> lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG0/lv_home
          VG Name                VG0
          LV UUID                WzJus7-2yV8-yhog-Ju1b-TpWH-IIAI-LIutwe
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                1.17 GiB
          Current LE             300
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           251:0
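
    A hedged reading of the output above: file -s reports plain "data" for the mapped device, which suggests the ext4 filesystem was created on the raw array before LVM was layered on top, and the logical volume itself was never formatted. A sketch of the likely fix (this destroys anything on the LV):

        sudo mkfs.ext4 /dev/VG0/lv_home     # put a filesystem on the logical volume itself
        sudo mount /dev/VG0/lv_home /home   # the mount should then succeed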

  • Is there a program that will show a tree of the differences in two file trees?

    - by Huckle
    In Windows I manually back up from time to time by formatting my external drive and copying the contents of my data partition over. Inevitably there is a difference in the number and size of the files copied, because of system files, etc. Is there a program that would diff two directories recursively and compile the differences into a nice GUI tree that I could peruse (and preferably filter) to ensure that everything I want made it over to the drive? It should only show files that are not in both directories. (Also, please ignore the inadequacy of my backup solution.)
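
    Not a GUI, but as a command-line starting point (e.g. under Cygwin or Git Bash on Windows), a recursive quiet diff reduces the two trees to just their differences; the paths are hypothetical:

        # report files that exist on only one side, without comparing contents line by line
        diff -rq /cygdrive/d/data /cygdrive/e/backup | grep '^Only in'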

  • options for deploying application

    - by terence
    I've created a simple web application, a self-contained tool with a user system. I host it publicly for everyone to use, but I've gotten some requests to allow companies to host the entire application privately on their internal systems. I have no idea what I'm doing - I have no experience with deployment or server stuff. I'm just some person who learned enough JS and PHP to make a tool for my own needs. The application runs with Apache, MySQL, and PHP. What's the best way to package my application to let others run it privately? I'm assuming there are better options than just sending them all the source code. I'd like to find a solution that:

    - does not require support to set up (I'm just a single developer without much free time)
    - is easy to configure
    - is easy to update

    Does there exist some one-size-fits-all thing that I can give to someone, so they can install it, and bam, when they go to http://myapplication/ on their intranet, it works? Thanks for your help.

  • Big Excel File Freezing/Running Slowly

    - by ktm5124
    Hi, my co-worker has a very large Excel file (over 7 MB) that suffers from the problems of (A) running slowly, (B) taking forever to open/save/close, and (C) freezing the computer, requiring a restart. I set the calculations to Manual and I repaired the file, but the file size didn't change and it is still having these problems. My questions are: (1) Is there any way around this problem, or is Excel just bad at handling ~7 MB files? (2) Would upgrading RAM make a big difference? (3) It's possible that we can't afford to spend the money on a RAM upgrade. Are there any other ways of mitigating the problem? Thanks.

  • How can I delete a project in Aperture, but not in Flickr?

    - by jchatard
    I have a MacBook Air as my only computer; I store my shots on Flickr. The size of the SSD doesn't allow me to keep all my RAW files on my laptop, so I publish the photographs I like, in their final state, to Flickr with the built-in feature of Aperture. When I hit delete on a project (in Aperture), the message clearly indicates that all photos synced to any service like Facebook, Flickr or any other will be deleted. How can I get around this without the hacky way of unlinking Aperture from Flickr before deleting the project?

  • Get Zipped Logs from a Remote Server

    - by Jonathan
    I am tasked with finding a way to download zipped logs from a remote server. There are quite a few of these logs, and they are constantly created. I do have limited ssh access to the remote server and can scp or rsync the files. However, due to the sheer size of these log files, I do not want to rsync all of them. The logs could reach terabytes, and for rsync to compare them may take some time. I only want to get any file that was created or last updated within the past hour. I am also worried that I will rsync logs that are in the process of being created, so I was thinking of only rsyncing files that were last modified more than 3-5 minutes ago. Would anyone be so kind as to help me with such a process? Thank you in advance.
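
    A sketch of that time window, using find's -mmin tests on the remote side and feeding the resulting list to rsync; the host and paths are hypothetical:

        # select logs modified between 5 and 60 minutes ago, then transfer only those
        ssh user@loghost 'cd /var/log/app && find . -name "*.gz" -mmin -60 -mmin +5' \
            | rsync -av --files-from=- user@loghost:/var/log/app/ /local/logs/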

  • How to find out when to increase bit rate? (TCP streaming solution)

    - by Kabumbus
    We have a stream of "frames"; each frame has a timestamp, and frames have a bit rate property, which is actually their size. We generate frames with our app and stream them one by one to our TCP server socket. At the same time, the server posts replies, so after each sent frame we try to read from the socket and learn which timestamp is currently on the server. If that timestamp is lower than the previous frame's, we lower the bit rate by 20%. Such a scheme seems to work, giving me a one-way (downward) VBR, but I wonder how to implement the increase. I mean, we could always try to increase by 5% each frame up to some desired limit, but each time we hit a delay we would lose the real-time feature of our stream... Generally, such a scheme is for finding out how much of the network is currently used by other apps, and for getting a picture of how loaded the server is, so we can stream just the right amount of data for everyone to receive it in real time. So what shall I do to add increase to my scheme? Having a current bit rate of A, I thought we could add +7% for 3 frames and then one -20%, and then if all 3 frames with +7% came in time, we could add 14% to A and repeat the cycle; it would hopefully not be really noticeable if the 2nd frame were to come to us with a delay... This is probably too localised, but it is a requirement for me to use TCP.
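
    What the post describes is essentially AIMD (additive increase, multiplicative decrease), the same probing pattern TCP's own congestion control uses. A minimal shell-arithmetic sketch of the proposed +7%/-20% rule, with all constants hypothetical:

        rate=100000                                             # current bit rate A
        on_frame_in_time()  { rate=$(( rate * 107 / 100 )); }   # gentle upward probe (+7%)
        on_frame_delayed()  { rate=$(( rate *  80 / 100 )); }   # fast back-off (-20%)
        # after 3 consecutive in-time probes, commit the raised rate as the new A and repeat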

  • How to fix a black screen?

    - by stupidwhiteguy
    So I recently had my question deleted and merged into a standard how-to for blank screens. I am relatively new to this type of computer work, and I don't understand the steps necessary to diagnose my problem well enough to solve it, so this helpful how-to has me feeling helpless. I can use Ctrl + Alt + F1 and log in. So how do I use sudo commands to fix the blank screen on my old Dell?

    sudo lspci -nn tells me my video card is an ATI Rage 128 Pro Ultra TF. /var/log/Xorg.0.log tells me "Permission denied"; that is all I get. I have tried sudo apt-get install --reinstall unity, and also apt-get update and upgrade.

    Please don't close this question without providing an actual answer, or if you think it is an exact duplicate, provide a solution that worked for that question. I see a lot of these questions closed, and no real answer given. I will try any solutions available and report results, so that others can also solve problems and not be overwhelmed by overly broad troubleshooting guides that do nothing to help solve specific issues.

    The nomodeset change from quiet splash also yields no results; on reboot I still get a blank screen. This screen still has Ctrl + Alt + F1 abilities, but that is it. Ctrl + Alt + F8 causes a blinking cursor, F7 gets a crazy flash with green and blue and then a blank screen, and Ctrl + Alt + F1 gives a log-in prompt, in text only. When run in recovery mode with failsafe graphics, it says the system is running in low-graphics mode: your screen, graphics card and input device settings could not be detected correctly, and you will need to configure these yourself. How do I do that?

    I got /var/log/failsafeX-backup-120909200641.tar as the location of my log files, but I have no idea how to access it. Sound works on the blank screen, and the screen responds or flashes after a log-in is typed and a password entered. Really, any help is good; I don't even know where to start. I believe 12.04 is installed and functioning, but I don't think I can see it. At the end of the error log it says:

        error setting mtrr (base= 0xf0000000, size= 0x01000000, type=1) inappropriate ioctl for device(25)

    I have tried to provide as much info as I understand how to provide.
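
    Two small hedged pointers for the log access mentioned above (the file name inside the archive is a guess):

        sudo cat /var/log/Xorg.0.log    # sudo gets around the "Permission denied"
        mkdir -p ~/failsafe-logs
        sudo tar -xf /var/log/failsafeX-backup-120909200641.tar -C ~/failsafe-logs
        less ~/failsafe-logs/Xorg.failsafe.log    # hypothetical name; list contents with tar -tf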

  • Have I created the recovery disks from the recovery partition correctly?

    - by Tim
    I was creating recovery disks from the recovery partition on my Lenovo T400 with Windows 7. 6.5 GB of the recovery partition is occupied. In the process, I created three DVDs. I might remember wrong, but the first two DVDs were called disk 1 by the wizard, and the third one was called disk 2. The first one has been written with only 0.22 GB, the second with 3.97 GB, and the third with 2.44 GB. I am allowed to create recovery disks only once, so I cannot try again. So I was wondering if I missed something? What is the creation process supposed to be like? Thanks and regards!

  • Virtualbox, merging snapshots and base disk

    - by Henrik
    Hi, I have a virtual machine with about 30 snapshots in branches. The current development path is 22 snapshots plus the base disk. The number of files is seemingly having an impact on IO on the dev laptop I'm using (I don't know if it is host disk performance issues with the 140 GB total size spread over a lot of fragments, or just the fact that it is hitting sectors distributed across a lot of files). I would like to merge the current development branch of snapshots together with the base disk, but I am unsure whether the following command would produce the correct outcome. I am not able to boot this disk after the procedure completes (5-6 hours):

        vboxmanage clonehd "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd" "E:\crm-20100624.vhd" --format VHD --type normal

    Could anyone confirm if this is the correct approach or not?
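
    A hedged pre-check before committing to the 5-6 hour clone: clonehd on a differencing image reads through its parent chain, so pointing it at the leaf image of the current branch should yield a standalone merged disk. showhdinfo reports an image's format, size and parent, which helps confirm the chosen .vhd really is that leaf:

        VBoxManage showhdinfo "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd"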

  • Samba / smbd on Centos 6.5

    - by Satalink
    I've installed Samba 4 and have the smb.conf file as follows:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server
        realm = REXIALO.COM
        netbios name = REXIALO.COM
        security = user
        map to guest = Bad Password
        bind interfaces only = no
        interfaces = lo venet0
        log file = /var/log/samba/samba.log
        max log size = 1000

        [webroot]
        path = /usr/local/apache/htdocs
        comment = Example.com webroot directory
        read only = No

    I can connect from the same server with smbclient. Localhost:

        # smbclient -L localhost -U root
        Enter root's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename       Type      Comment
            ---------       ----      -------
            webroot         Disk      RexiAlo webroot directory
            IPC$            IPC       IPC Service (RexiAlo Samba Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    Network:

        # smbclient -L rexialo.com -U
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename       Type      Comment
            ---------       ----      -------
            webroot         Disk      RexiAlo webroot directory
            IPC$            IPC       IPC Service (RexiAlo Samba Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    The problem is when I try to map to the smb webroot from Windows 7: it asks for user/pass, but just times out and then prompts for credentials again. The samba.log file does not show any activity other than the startup of the smbd process. Any help would be appreciated.
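
    Two hedged first checks: with security = user, Windows must present a user that exists in Samba's own password database, and a timeout (rather than an outright rejection) often points at a firewall blocking ports 445/139. The username is hypothetical:

        sudo smbpasswd -a someuser    # add the connecting user to Samba's password database
        sudo service smb restart      # typical CentOS 6.x service name
        # also confirm iptables allows TCP ports 445 and 139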

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k:

        root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat .
          File: `.'
          Size: 458752      Blocks: 904        IO Block: 4096   directory
        Device: 812h/2066d  Inode: 44499071    Links: 2
        Access: (0755/drwxr-xr-x)  Uid: ( 3292/ user)   Gid: ( 3287/ user)
        Access: 2012-06-29 17:31:47.000000000 -0400
        Modify: 2012-10-23 14:41:58.000000000 -0400
        Change: 2012-10-23 14:41:58.000000000 -0400

    I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ASCII characters somewhere in the file name:

        La-critic\363-al-servicio-la-privacidad-300x160.jpg

    When I try to access the files (say, to copy them or remove them) I get messages like the following:

        lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory)

    I tried altering the code found on this man page and modified it to call unlink for each file. I get the same ENOENT error from the unlink call:

        unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory)

    I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes, and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process).

    I'm stumped on this one. I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number; I can get those with the getdents code), I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT.)

    For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc., because I'm lazy and this is a one-off:

        root@server [~]# diff -u listdir-orig.c listdir.c
        --- listdir-orig.c      2012-10-23 15:10:02.000000000 -0400
        +++ listdir.c   2012-10-23 14:59:47.000000000 -0400
        @@ -6,6 +6,7 @@
         #include <stdlib.h>
         #include <sys/stat.h>
         #include <sys/syscall.h>
        +#include <string.h>

         #define handle_error(msg) \
             do { perror(msg); exit(EXIT_FAILURE); } while (0)
        @@ -17,7 +18,7 @@
             char d_name[];
         };

        -#define BUF_SIZE 1024
        +#define BUF_SIZE 1024*1024*5

         int main(int argc, char *argv[])
         {
        @@ -26,11 +27,16 @@
             struct linux_dirent *d;
             int bpos;
             char d_type;
        +    int deleted;
        +    int file_descriptor;

             fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
             if (fd == -1)
                 handle_error("open");

        +    char* full_path;
        +    char* fd_path;
        +
             for ( ; ; ) {
                 nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);
                 if (nread == -1)
        @@ -55,7 +61,24 @@
                 printf("%4d %10lld %s\n", d->d_reclen,
                        (long long) d->d_off, (char *) d->d_name);
                 bpos += d->d_reclen;
        +        if ( d_type == DT_REG )
        +        {
        +            full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); // One for the /, one for the \0
        +            strcpy(full_path, argv[1]);
        +            strcat(full_path, (char *) d->d_name);
        +
        +            // We're going to try to "touch" the file.
        +            //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666);
        +            //fd_path = malloc(32); // Lazy, only really needs 16
        +            //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor);
        +            //utimes(fd_path, NULL);
        +            //close(file_descriptor);
        +            deleted = unlink(full_path);
        +            if ( deleted == -1 ) printf("Error unlinking file\n");
        +            break; // Break on first try
        +        }
             }
        +    break; // Break on first try
         }
         exit(EXIT_SUCCESS);

  • Unable to mount Amazon Public Dataset using ec2-create-volume

    - by the0ther
    I am trying to use a Public Dataset with the snapshot id snap-e1608d88. I am looking at these instructions, but they do not seem to help. The first suggestion there says I should click on Volumes and create a new volume, set its size and availability zone, and specify the snapshot id. The problem is, the snapshot id is a dropdown, not a text field, and there are over 100 options in the dropdown. Next, I installed the EC2 command line tools and tried to run the ec2-create-volume command. For my first attempt I tried

        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1

    but that gave output indicating I need to provide a certificate with the --cert switch. Which certificate exactly? I tried my SSH cert at ~/.ssh/id_rsa. No dice. I got the following Java error: "org.codehaus.xfire.fault.XFireFault: General security error;"
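
    A hedged note: the ec2-api-tools authenticate with the X.509 certificate/private-key pair from the AWS account's credentials page, not an SSH key; the file names below are hypothetical. Also, --availability-zone expects a zone such as us-east-1a, not the bare region us-east-1:

        export EC2_CERT=~/.ec2/cert-ABCDEF1234567890.pem
        export EC2_PRIVATE_KEY=~/.ec2/pk-ABCDEF1234567890.pem
        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1a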

  • When I plug an HDMI cable into my MacBook Pro with Retina Display, the resolution of the retina display changes

    - by Tyler Crompton
    I had an HDMI cable plugged in for a while, connected to a TV with no power. The retina display randomly flashed blue after a while and then changed to a different resolution (possibly 1080p). I unplugged the HDMI cable and nothing happened, so I restarted the computer, to no avail; I was still having issues. So I fiddled with some settings after plugging in the HDMI cable again. Now the retina display appears fine when no HDMI cable is plugged in, but when one is plugged in, the resolution changes. I have mirroring turned off (which is what I want). The resolution of the retina display when an HDMI cable is plugged in is 3,360px x 2,100px (according to the size of a screenshot), and the external display is 1,600px x 900px. How can I fix the retina display to show the correct resolution?

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven very useful to more than one part of our applications. Most of these modules come with two different files: a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minimizing the size of the file, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also some functions in a page that don't really make sense to put into one master page. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into that, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction to go with on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
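
    As a sketch of the one-master-file approach, assuming a minifier such as uglifyjs is available (the output names are hypothetical):

        # concatenate the shared scripts named above, minify, and pre-compress for production
        cat common.js ajax.js > master.js
        uglifyjs master.js -o master.min.js
        gzip -c master.min.js > master.min.js.gz    # serve the .gz where the server supports it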

  • Strange Phantom Local Disks appearing in my drive list...

    - by Paul
    Win7 Home Premium, 32-bit. I seem to have several phantom Local Disks mapped to different letters; they are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and then go to open a file, I can see all these Local Disks mapped to different letters. This means when I plug in my external hard disk, it ends up mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there, and more importantly, how do I get rid of them, please?
