Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 10 of 67

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application onto an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, there's a mirror of the SAN being taken more or less constantly at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks

    Read the article

  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ-Agility3 120GB from a 60GB OCZ Vertex2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems. Recently, in the past month, I have been getting the error mentioned in the title. I downloaded the OCZToolboxMP and ran the SMART utility and don't see anything wrong: SMART READ DATA ModelNumber : OCZ-AGILITY3 Serial Number : OCZ-Y1945X77438P4NU6 WWN : 5-e8-3a-97 ebea5ba76 Revision: 10 Attributes List 1: SSD Raw Read Error Rate Normalized Rate: 70 total ECC and RAISE errors 5: SSD Retired Block Count Reserve blocks remaining: 100% 9: SSD Power-On Hours Total hours power on: 968 12: SSD Power Cycle Count Count of power on/off cycles: 28 171: SSD Program Fail Count Total number of Flash program operation failures: 0 172: SSD Erase Fail Count Total number of Flash erase operation failures: 0 174: SSD Unexpected power loss count Total number of unexpected power loss: 11 177: SSD Wear Range Delta Delta between most-worn and least-worn Flash blocks: 0 181: SSD Program Fail Count Total number of Flash program operation failures: 0 182: SSD Erase Fail Count Total number of Flash erase operation failures: 0 187: SSD Reported Uncorrectable Errors Uncorrectable RAISE errors reported to the host for all data access: 4145 194: SSD Temperature Monitoring Current: 30 High: 30 Low: 30 195: SSD ECC On-the-fly Count Normalized Rate: 120 196: SSD Reallocation Event Count Total number of reallocated Flash blocks: 100 201: SSD Uncorrectable Soft Read Error Rate Normalized Rate: 120 204: SSD Soft ECC Correction Rate (RAISE) Normalized Rate: 120 230: SSD Life Curve Status Current state of drive operation based upon the Life Curve: 100 231: SSD Life Left Approximate SSD life remaining: 100% 241: SSD Lifetime writes from host: 893 GB 242: SSD Lifetime reads from host: 968 GB Does anyone have any ideas of what might be wrong and/or how I can go about fixing this? Please let me know if there is other information I can provide. Thanks for your help. Windows 7 x64 SP1, AMD Phenom II X4 940, 8GB RAM

    Read the article

  • High Sqlservr.exe Memory Usage

    - by user18576
    I have a problem with sqlservr.exe (version 2008). It uses too much memory. I checked in Windows Task Manager: sqlservr.exe memory usage is about 8GB of RAM. I don't know how to fix it. I got the following metrics for the server using Perfmon: SQLServer:Buffer Manager Buffer cache hit ratio 13 SQLServer:Buffer Manager Page lookups/sec 46026128096 SQLServer:Buffer Manager Free pages 129295 SQLServer:Buffer Manager Total pages 997309 SQLServer:Buffer Manager Target pages 1053560 SQLServer:Buffer Manager Database pages 484117 SQLServer:Buffer Manager Reserved pages 0 SQLServer:Buffer Manager Stolen pages 383897 SQLServer:Buffer Manager Lazy writes/sec 384369 SQLServer:Buffer Manager Readahead pages/sec 69315446 SQLServer:Buffer Manager Page reads/sec 71280353 SQLServer:Buffer Manager Page writes/sec 12408371 SQLServer:Buffer Manager Checkpoint pages/sec 7053801 SQLServer:Buffer Manager Page life expectancy 735262 SQLServer:General Statistics Active Temp Tables 161 SQLServer:General Statistics Temp Tables Creation Rate 3131845 SQLServer:General Statistics Logins/sec 2336011 SQLServer:General Statistics Logouts/sec 2335984 SQLServer:General Statistics User Connections 27 SQLServer:General Statistics Transactions 0 SQLServer:Access Methods Full Scans/sec 34422821 SQLServer:Access Methods Range Scans/sec 2027247756 SQLServer:Access Methods Workfiles Created/sec 49771600 SQLServer:Access Methods Worktables Created/sec 28205828 SQLServer:Access Methods Index Searches/sec 4890715219 SQLServer:Access Methods FreeSpace Scans/sec 21178928 SQLServer:Access Methods FreeSpace Page Fetches/sec 21226653 SQLServer:Access Methods Pages Allocated/sec 41483279 SQLServer:Access Methods Extents Allocated/sec 4743504 SQLServer:Access Methods Extent Deallocations/sec 4806606 SQLServer:Access Methods Page Deallocations/sec 41419137 SQLServer:Access Methods Page Splits/sec 23834799 SQLServer:Memory Manager SQL Cache Memory (KB) 29160 SQLServer:Memory Manager Target Server Memory (KB) 8428480 SQLServer:Memory Manager Total Server Memory (KB) 7978472 Could somebody help me, please? I really want to know the cause of the above.

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. — something like the famous www.heise.de/download/h2testw, or at least something common within repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify that it was written correctly, and displays write/read time/speed.) Please no dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k, since it won't verify whether everything was written correctly; it only tests whether reading from and writing to the device succeeds. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow and I don't know exactly what it writes, or whether it accounts for wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
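    For illustration, here is a minimal bash sketch of the same write-then-verify idea (not a replacement for h2testw): it fills the mounted medium with test files, records their checksums, drops the page cache so the verify pass really hits the device, and compares. The mount point, file count and file size are assumptions to adjust; the cache drop needs root.

        #!/usr/bin/env bash
        # Sketch: write known files to the medium, then read them back and verify.
        set -euo pipefail
        MOUNT=/media/usbstick   # assumption: where the device is mounted
        COUNT=8                 # number of test files
        SIZE_MB=100             # size of each test file in MiB
        declare -a sums
        for i in $(seq 1 "$COUNT"); do
            f="$MOUNT/testfile_$i"
            dd if=/dev/urandom of="$f" bs=1M count="$SIZE_MB" conv=fsync status=none
            sums[$i]=$(sha256sum "$f" | cut -d' ' -f1)
        done
        sync
        echo 3 > /proc/sys/vm/drop_caches   # needs root: force the verify pass to read the device
        for i in $(seq 1 "$COUNT"); do
            f="$MOUNT/testfile_$i"
            now=$(sha256sum "$f" | cut -d' ' -f1)
            if [ "$now" = "${sums[$i]}" ]; then echo "$f OK"; else echo "$f CORRUPT"; fi
        done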

    Read the article

  • mysql is not connecting after data directory change

    - by user123827
    I've changed the data directory in /etc/my.cnf: datadir=/data/mysql socket=/data/mysql/mysql.sock. I also moved the mysql folder from /var/lib/mysql/ to /data/mysql. Now when I connect to mysql I get the following error: [root@youradstats-copy mysql]# mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Also, when I look at /var/log/mysqld.log I see the following messages: InnoDB: Setting log file /data/mysql/ib_logfile0 size to 512 MB InnoDB: Database physically writes the file full: wait... InnoDB: Progress in MB: 100 200 300 400 500 120704 7:43:31 InnoDB: Log file /data/mysql/ib_logfile1 did not exist: new to be created InnoDB: Setting log file /data/mysql/ib_logfile1 size to 512 MB InnoDB: Database physically writes the file full: wait... InnoDB: Progress in MB: 100 200 300 400 500 InnoDB: Cannot initialize created log files because InnoDB: data files are corrupt, or new data files were InnoDB: created when the database was started previous InnoDB: time but the database was not shut down InnoDB: normally after that. 120704 7:43:36 [ERROR] Plugin 'InnoDB' init function returned error. 120704 7:43:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. I shut down mysql properly before making these changes and then started it properly, but I don't know why I am getting these messages. Please help me solve this issue: I have changed the socket path in my.cnf but the client is still pointing to the old path...
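    The "Can't connect ... /var/lib/mysql/mysql.sock" part means the mysql command-line client is still using its compiled-in default socket path; the socket= setting under [mysqld] only moves the server side. A sketch of the relevant my.cnf sections, assuming the paths from the question:

        # /etc/my.cnf (sketch)
        [mysqld]
        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

        [client]
        # the mysql/mysqladmin clients read this section; without it they keep
        # looking for /var/lib/mysql/mysql.sock
        socket=/data/mysql/mysql.sock

    Alternatively, a one-off connection can point at the new socket explicitly with mysql --socket=/data/mysql/mysql.sock.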

    Read the article

  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on a CentOS 5.8 (final) server. My only concern at the moment is what appears to be intermittent but continuous high disk I/O activity causing a general slowdown, because of the jbd2/sda2-8 process. jbd2/sda2-8 is making use of /dev/sda2, which is the 2nd partition of the first hard drive (i.e. the root partition). More info: using iotop, the culprit appears to be "jbd2/sda1-8" making writes every second, which appears to be a kernel process associated with journaling on the ext4 filesystem, if my googling around is correct. I see "jbd2/sda2-8" appearing here every now and then, but certainly not every 3 seconds; when idle, it appears about 1 or 2 times per minute. When I'm using the system, it appears more frequently. ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png HTOP results: http://grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png jbd2/sda2-8 is the process I see with iotop making writes to disk even though the system is not in use at all. Does anyone have any idea how I could solve the high disk usage caused by the jbd2/sda2-8 process?
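    For what it's worth, jbd2 is the ext4 journal thread, so it cannot be removed entirely, but its commit frequency can be reduced. A hedged sketch of the usual mount-option tuning (the values are assumptions; a longer commit interval means more data at risk on a crash):

        # Remount the root filesystem with noatime (no metadata write per file read)
        # and a 60-second journal commit interval instead of the 5-second default.
        mount -o remount,noatime,commit=60 /

        # To make it permanent, adjust the root entry in /etc/fstab, e.g.:
        # /dev/sda2  /  ext4  defaults,noatime,commit=60  1 1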

    Read the article

  • How can I find out if a port is opened or not?

    - by Roman
    I have installed the Apache server on my Windows 7 computer. I was able to display the default index.php by typing http://localhost/ in the address line of my browser. However, I am still unable to see this page by typing the IP address of my computer, neither locally (from the same computer) nor globally (from another computer connected to the Internet). I was told that I need to open port 80. I did it (in the way described here) but it did not solve the problem. First of all, I would like to check which ports are open and which are not. For example, I am not sure that my port 80 was closed before I tried to open it. I am also not sure that it is open after I tried to open it. I tried to run a very simple web server written in Python. For that I used port 81 and it worked! And I did not try to open port 81, so it was open by default. So, if 81 is open by default, why isn't 80? Or is it? ADDITIONAL INFORMATION: 1. In my httpd.conf file I have "Listen 80". 2. This site tells me that port 80 on my computer is open. 3. I get different responses if I try http://myip:80 and http://myip:81. In the latter case the browser (Chrome) tells me that the link is broken. In the former case I get: Forbidden - You don't have permission to access / on this server. 4. IE says "The website declined to show this webpage".
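    As a quick way to check, a sketch of commands to see what is listening and whether it is reachable (run the first on the Windows 7 box in cmd.exe; <your-ip> is a placeholder, and the telnet client may need to be enabled under "Turn Windows features on or off"):

        :: Is anything bound to port 80 locally?
        netstat -an | findstr :80

        :: From another machine: does a TCP connection to port 80 succeed?
        telnet <your-ip> 80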

    Read the article

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that my Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to throw some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out. The transfer starts but never goes past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config): service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = nobody server = /usr/sbin/in.tftpd server_args = -s /home/user/tftp/ disable = no cps = 300 2 per_source = 60 } I've done some searching but can't find any parameters for this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help? Either with a timeout configuration, or maybe even a recommendation for a different TFTP server? Thanks!
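    On the two xinetd values: cps = 300 2 caps the service at 300 connections per second and pauses it for 2 seconds when that is exceeded, and per_source = 60 limits simultaneous connections from a single source IP; neither controls retransmission. Retransmission timing belongs to the TFTP daemon itself and goes in server_args. A sketch, assuming the in.tftpd here is tftpd-hpa (verify the exact option names against man in.tftpd on your version):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            # -s: chroot to the TFTP root; -v: more syslog detail;
            # --retransmit: retransmission timeout in microseconds (assumed flag/value)
            server_args = -s /home/user/tftp/ -v --retransmit 2000000
            disable     = no
            cps         = 300 2
            per_source  = 60
        }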

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted... for best write performance, for use only with embedded Linux, for better reliability (random power failures may occur), using a 64kB cluster size. I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card with FAT32, the performance still suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64kB cluster size. "Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."
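    If the aim is to recreate that 64 kB allocation unit after a reformat, a sketch of the relevant mkfs invocations (the device name and 512-byte logical sector size are assumptions to verify first; the card's actual erase-block size matters more than any fixed number here):

        # FAT32 with 64 kB clusters: 128 sectors/cluster x 512 B = 64 kB
        mkfs.vfat -F 32 -s 128 /dev/mmcblk0p1

        # ext4 alternative: the block size is capped at 4 kB, but stride/stripe-width
        # align allocations to a 64 kB boundary (16 x 4 kB blocks)
        mkfs.ext4 -b 4096 -E stride=16,stripe-width=16 /dev/mmcblk0p1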

    Read the article

  • Hibernate between OS X and Bootcamp Win 7

    - by Willem
    Wouldn't it be great if someone wrote a guide or an app which allowed you to switch instantly between OS X and Windows using hibernate in both OSes? Windows 7 already has a "Hibernate" option which allows you to boot back into your OS X partition, but OS X does not offer exactly the same. However, there are possibilities here. It seems that recent Macs have 3 different kinds of sleep mode: Sleep: low power consumption, RAM still active. Legacy Safe Sleep: no power consumption(?), writes RAM to disk and shuts down (is this the same as hibernate?). Safe Sleep: writes RAM to disk and enters sleep mode; if the battery level drops too low it goes into hibernate (is this hibernate the same as #2 in this list? This is the hibernate I will be referring to in the rest of this post). It seems that I am unable to force my MacBook Pro (Late 2011) running OS X 10.7.3 into a true hibernate using either the command line or apps that are supposed to do this. I believe the Mac should show that white loading bar whilst waking up if it was truly put into hibernate (which it does not). But I can get this white bar to show by letting my battery level drop to 0%, so there is obviously a system function for it (obviously, duh! :). When Win 7 goes into hibernate it shuts down completely and you can then boot into OS X on startup. On OS X, however, hibernate forces you to wake up into OS X. Can you hack this so that you're allowed to select the boot partition after OS X hibernates? Would it be possible to use the true hibernate system functionality of Win 7 and OS X to create a kind of instant switching between the two? Imagine this on a quick SATA-3 SSD like my 180GB Intel 520. Thanks / Willem
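    On the "unable to force a true hibernate" part: OS X exposes this through pmset's hibernatemode setting. A sketch (mode numbers per the pmset man page; note the current value before changing it):

        # Check the current setting (3 = the laptop default "safe sleep",
        # 25 = write RAM to disk and power off, i.e. a true hibernate)
        pmset -g | grep hibernatemode

        # Switch to hibernate-only behaviour, then sleep the machine as usual
        sudo pmset -a hibernatemode 25
        # restore the default later with: sudo pmset -a hibernatemode 3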

    Read the article

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to figure out what my IOPS really are on my DB server array and see if it's just too much. The array is four 72.6GB 15k rpm drives in RAID 5. To calculate IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = total IOPS. The formula is from MSDN. I also want to calculate the average queue length, but I'm not sure where they get the formula from; I think it reads on that page as avg queue length / number of disks = actual queue. To populate that formula I used Perfmon to gather the needed information. I came up with this, under normal production load: (873.982 + (4 * 28.999)) / 4 = 247.495. Also the disk queue length of 14.454 / 4 = 3.614. So, to the question: am I wrong in thinking this array has very high disk IO? Edit: I got the chance to review it again this morning under normal/high load. This time with even bigger numbers, and IOPS in excess of 600 for about 5 minutes; then it died down again. But I also took a look at Avg sec/Transfer, %Disk Time, and %Idle Time. These numbers were taken when the reads/writes per sec were only 332.997/17.999 respectively. %Disk Time: 219.436 %Idle Time: 0.300 Avg Disk Queue Length: 2.194 Avg Disk sec/Transfer: 0.006 Pages/sec: 2927.802 % Processor Time: 21.877 Edit (again): Looks like I have that issue solved. Thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ It works pretty well for breaking down the data into something usable.

    Read the article

  • please explain these mongo statistics

    - by sivann
    My setup: I have 2 hosts and 2 shards. Host1 holds both shards and is the master of the replicas; host2 has the secondaries of the 2 shards. host1: shard1 (repset1), shard2 (repset2); host2: shard1 (repset1), shard2 (repset2). There's also a 3rd host that acts as arbiter. I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert. The questions: mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the locks be the same? mongostat reports qr=30 for both shards of host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported as 0. Faults are about the same in all shards/hosts (around 80). netIn/netOut on the secondaries (host2) are always about 200 bytes/sec. Too low. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this? Please answer whatever you can. Thanks!

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one-hour sections. You spend three quarters of each hour coding, and do a little retrospective at the end. You’re supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again, because there are so many what-ifs when you’ve finished writing something: “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, and we were still trying to figure out how you go about the Game of Life. So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one.
Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is: Write a test, watch it fail for the right reason. Write code to pass the test, watch it pass. Refactor, check the test still passes. Repeat! It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: you’re supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this: public T If(bool condition, Func<T> left, Func<T> right) { var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}}; return dict[condition].Invoke(); } That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make things easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply it to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing; what does the other person do if they’re not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have a discussion about things, which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault.
I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so I didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?”, “What surprised you?” and “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end product isn’t important: that’s something people really enjoyed. Pair programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good; they promote functional programming. And TDD, people found really hard. “Tell, don’t ask” people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.

    Read the article

  • Write a signal handler to catch SIGSEGV

    - by Adi
    Hi all, I want to write a signal handler to catch SIGSEGV. First, I protect a block of memory against reads or writes using char *buffer; char *p; char a; int pagesize = 4096; mprotect(buffer, pagesize, PROT_NONE); What this does is protect the memory starting at buffer, for pagesize bytes, against any reads or writes. Second, I try to read the memory by doing something like p = buffer; a = *p; This generates a SIGSEGV, and I have installed a handler for it. The handler is called. So far so good. Now the problem I am facing is that once the handler is called, I want to change the access rights of the memory by doing mprotect(buffer, pagesize, PROT_READ); and continue the normal functioning of my code. I do not want to exit the function. On future writes to the same memory, I want to catch the signal again, modify the write rights, and then take account of that event. Here is the code I am trying: #include <signal.h> #include <stdio.h> #include <malloc.h> #include <stdlib.h> #include <errno.h> #include <sys/mman.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) char *buffer; int flag=0; static void handler(int sig, siginfo_t *si, void *unused) { printf("Got SIGSEGV at address: 0x%lx\n",(long) si->si_addr); printf("Implements the handler only\n"); flag=1; //exit(EXIT_FAILURE); } int main(int argc, char *argv[]) { char *p; char a; int pagesize; struct sigaction sa; sa.sa_flags = SA_SIGINFO; sigemptyset(&sa.sa_mask); sa.sa_sigaction = handler; if (sigaction(SIGSEGV, &sa, NULL) == -1) handle_error("sigaction"); pagesize=4096; /* Allocate a buffer aligned on a page boundary; initial protection is PROT_READ | PROT_WRITE */ buffer = memalign(pagesize, 4 * pagesize); if (buffer == NULL) handle_error("memalign"); printf("Start of region: 0x%lx\n", (long) buffer); printf("Start of region: 0x%lx\n", (long) buffer+pagesize); printf("Start of region: 0x%lx\n", (long) buffer+2*pagesize); printf("Start of region: 0x%lx\n", (long) buffer+3*pagesize); //if (mprotect(buffer + pagesize * 0, pagesize,PROT_NONE) == -1) if (mprotect(buffer + pagesize * 0, pagesize,PROT_NONE) == -1) handle_error("mprotect"); //for (p = buffer ; ; ) if(flag==0) { p = buffer+pagesize/2; printf("It comes here before reading memory\n"); a = *p; //trying to read the memory printf("It comes here after reading memory\n"); } else { if (mprotect(buffer + pagesize * 0, pagesize,PROT_READ) == -1) handle_error("mprotect"); a = *p; printf("Now i can read the memory\n"); } /* for (p = buffer;p<=buffer+4*pagesize ;p++ ) { //a = *(p); *(p) = 'a'; printf("Writing at address %p\n",p); }*/ printf("Loop completed\n"); /* Should never happen */ exit(EXIT_SUCCESS); } The problem I am facing with this is that only the signal handler runs, and I am not able to return to the main function after catching the signal. Any help with this will be greatly appreciated. Thanks in advance, Aditya
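    For context, the usual pattern here (roughly what the mprotect(2) man page example does) is to restore access from inside the handler, using si_addr to find the faulting page, so that the interrupted instruction is retried successfully when the handler returns; returning from the handler without changing the protection just re-faults forever. A minimal sketch, separate from the code above (re-protecting the page afterwards, to trap the next access too, would then be done from the main flow once the event has been recorded):

        #include <signal.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static long pagesize;

        /* Unprotect the page containing the faulting address, so the access
           that raised SIGSEGV succeeds when the handler returns. */
        static void handler(int sig, siginfo_t *si, void *unused)
        {
            (void)sig; (void)unused;
            void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesize - 1));
            if (mprotect(page, pagesize, PROT_READ | PROT_WRITE) == -1)
                _exit(EXIT_FAILURE);   /* nothing async-signal-safe left to do */
        }

        int main(void)
        {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_flags = SA_SIGINFO;
            sigemptyset(&sa.sa_mask);
            sa.sa_sigaction = handler;
            if (sigaction(SIGSEGV, &sa, NULL) == -1) { perror("sigaction"); exit(1); }

            pagesize = sysconf(_SC_PAGESIZE);
            char *buffer = mmap(NULL, 4 * pagesize, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buffer == MAP_FAILED) { perror("mmap"); exit(1); }
            if (mprotect(buffer, pagesize, PROT_NONE) == -1) { perror("mprotect"); exit(1); }

            buffer[10] = 'a';          /* faults once; handler unprotects; retry succeeds */
            printf("back in main, value = %c\n", buffer[10]);
            return 0;
        }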

    Read the article

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All, I'm using Java 1.6, JTDS 1.2.2 (I also just tried 1.2.4, to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching. The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that SQL is sent down, and then the database executes it... Could the database be giving the Java app a different query plan? I've posted to both the MSDN forums and the SourceForge JTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.) Java code snippet: sLogger.info("Preparing call..."); stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows"); sLogger.info("Call prepared. Executing procedure..."); stmt.executeQuery(); sLogger.info("Procedure complete."); I have run SQL profiler, and found the following: Java app: CPU: 466,514 Reads: 142,478,387 Writes: 284,078 Duration: 983,796 SSMS: CPU: 466,973 Reads: 142,440,401 Writes: 280,244 Duration: 769,851 (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think? It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it is executing joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it). Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc.) between the Java and Management Studio clients. It ended up that the two clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command-line client, SQLCMD. I also checked some more radical things, like network traffic, and wrapping the stored proc inside another stored proc, just for grins. I have a feeling the problem lies somewhere in the way the cursor was being executed, which was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all the data each week anyway), and chalk this one down to experience.
I'll leave the question open; big thanks to everyone who put their hat in the ring, it was all useful, and if anyone comes up with anything further, I'd love to hear some more options... And if anyone finds this post as a result of seeing this behaviour in their own environment, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now! -James
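    For readers hitting the same symptom: differing session SET options (ARITHABORT in particular) are the classic reason SSMS and a JDBC driver end up with different cached plans, which is what the "homogenising the connection properties" attempt above was chasing. A hedged sketch of pinning the options on the JDBC session before the call (which options actually differ on a given setup is an assumption; they can be compared via sys.dm_exec_sessions):

        // Sketch (Java 6 style): align this session's SET options with SSMS so
        // both clients are eligible for the same cached plan, then run the proc.
        Statement s = mCon.createStatement();
        s.execute("SET ARITHABORT ON");
        s.execute("SET ANSI_NULLS ON");
        s.execute("SET ANSI_WARNINGS ON");
        s.close();

        CallableStatement stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        stmt.execute();
        stmt.close();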

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads. Netapp's Fundamental Problem: The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has proven this lack of large-block performance in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECSFS, both recently; and in 2011 they purchased Engenio to try and fill this gap in their portfolio. Block Size Matters: So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics and Microsoft SQL; Hadoop HDFS blocks are even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will have to do 16 different disk IOPS again. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MSSQL database on a ZS3, you can set the specific LUNs for this database to 64k, and then an application read/write requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays will have some fancy prefetch algorithm and some nice cache, and maybe even flash-based cache such as a PAM card in your Netapp, but with large block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and not very dedupable, they are generally not good candidates for an all-flash system. You can do some simple math in Excel and very quickly you will see why it matters. Here are a couple of READ examples using SAS and MSSQL. Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks (notice the number of drives on the Netapp!), and here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU way before you get to these drive numbers, so you would be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today. ZS3 World Record Price/Performance in the SPC-2 benchmark: the ZS3-2 is #1 in Price-Performance at $12.08 and #3 in Overall Performance at 16,212 MBPS. Note: the number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a Price-Performance of $29.79. A customer could purchase 2 x ZS3-2 systems in the benchmark with relatively the same performance and walk away with $600,000 in their pocket.

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is the appropriate place to ask a design question. The site gives me the hint "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway... One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, for a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created the entity public class HitsStatsDO implements Serializable { @Id transient private Long id; transient private Long version = (long) 0; transient private Long startDate; @Parent transient private Key parent; // fake parent which contains target id @Transient int targetId; private double avgPercent; private long hitCount; } But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously it hits the concurrent-write limit of the Datastore. I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this: // Contains stats for one hour private class Shard { ReadWriteLock lock = new ReentrantReadWriteLock(); Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID public void saveToDatastore(); public void updateStats(Long startDate, Map<Integer, Double> hits); } and a map with the shard for the current hour and the previous hour (which doesn't stay here for long): private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as the load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting? Thank you very much in advance for your thoughts.

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation, and identify configuration changes. I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify. Disaster Planning: ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server; you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster, or you could DIFF them with the configuration on the new server. Configuration Changes: These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files. Database Scripting: Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file, these really aren't appropriate for recreating an empty database; they are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts. Conclusion: I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, for a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created the entity public class HitsStatsDO implements Serializable { @Id transient private Long id; transient private Long version = (long) 0; transient private Long startDate; @Parent transient private Key parent; // fake parent which contains target id @Transient int targetId; private double avgPercent; private long hitCount; } But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously it hits the concurrent-write limit of the Datastore. I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this: // Contains stats for one hour private class Shard { ReadWriteLock lock = new ReentrantReadWriteLock(); Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID public void saveToDatastore(); public void updateStats(Long startDate, Map<Integer, Double> hits); } and a map with the shard for the current hour and the previous hour (which doesn't stay here for long): private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as the load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting?

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include: domain object caching, service response caching, session state caching, JSR-107, HotCache and more! Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy! What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed. How is Coherence different from a NoSQL database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is lower with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services. Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: you get expiration capabilities, size-limitation capabilities, eventing capabilities, etc. Are there APIs available for GoldenGate HotCache? It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that. Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that, because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
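    To illustrate the EntryProcessor point above, here is a minimal sketch of an atomic read-modify-write against a single cache key. The class and method names follow the public Coherence API as commonly documented, but treat the exact signatures as something to verify against your Coherence version.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        // An EntryProcessor executes atomically against the entry for one key,
        // giving strict ordering of the read and the write without explicit locks.
        public class IncrementProcessor extends AbstractProcessor {
            public Object process(InvocableMap.Entry entry) {
                Integer current = (Integer) entry.getValue();
                int next = (current == null ? 0 : current.intValue()) + 1;
                entry.setValue(Integer.valueOf(next)); // written back atomically with the read
                return Integer.valueOf(next);
            }
        }

        // Usage:
        // NamedCache cache = CacheFactory.getCache("counters");
        // Integer updated = (Integer) cache.invoke("page-hits", new IncrementProcessor());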

    Read the article

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution. Originally I used Excel. A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could all in the end be traced back to A1. A1 was like an element of an array; I then incremented it to go through all my data. This was way too slow. So the only other option I found originally was to hand-code the calculations in C# inside a loop. Then I simply recompiled each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow. Next I created an application to read Excel and to perfectly imitate it, which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantage is that my application dependency-sorts everything (or I could use events) so I don't have to, like Excel does - and of course the speed. But now it's not a single application anymore. Instead it's 2 applications, one which only reads my formulas and writes another program. The other one is the result, which only lives for a short while before I do other runs through my data with different formulas / settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the 2 applications talk to each other. My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again. So a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshalled to be moved between the AppDomains. That would slow things down, not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel. I'm really super puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do things like Excel, which is to evaluate strings again and again. So my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user-defined stuff into a program, or evaluating it from a string or something stupid, again and again. And my only option is probably to switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work? Is anything better possible? Like a running program that can accept user input, compile it, and then unload it later? I mean heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the caveman days?
    Sorry, it's just so super frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
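    For what it's worth, .NET can compile user-entered formulas in-process without restarting anything, via System.CodeDom.Compiler and reflection; the catch is that individual assemblies cannot be unloaded from an AppDomain (only a whole AppDomain can), so a long-running session accumulates them. A rough C# sketch, with the formula text and type names purely illustrative:

        using System;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using Microsoft.CSharp;

        // Sketch: compile a user-entered formula into an in-memory assembly and
        // call it through a delegate. Each recompile produces a new assembly.
        class FormulaCompiler
        {
            public static Func<double, double> Compile(string formulaBody)
            {
                string source = @"
                    public static class UserFormula
                    {
                        public static double Eval(double a1) { return " + formulaBody + @"; }
                    }";

                var provider = new CSharpCodeProvider();
                var options = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException(results.Errors[0].ErrorText);

                MethodInfo eval = results.CompiledAssembly.GetType("UserFormula").GetMethod("Eval");
                return a1 => (double)eval.Invoke(null, new object[] { a1 });
            }

            static void Main()
            {
                var f = Compile("a1 * a1 + 2.0");   // hypothetical user input over cell value a1
                Console.WriteLine(f(3.0));          // prints 11
            }
        }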

    Read the article

  • Help with redirection for .com, .net and .org domains.

    - by user198553
    Hi all! I need help with some rules for ISAPI_Rewrite in my installation. I'm going to be very honest about my needs: I need to do this configuration in the next few hours, and don't have time right now to understand everything about rewrites, regular expressions and such. I really think you can help me; if I had more reputation I would even set up a bounty... :( In fact, I believe that what I need is simple: I have a .com domain. The main URL of my website is going to be http://www.mainurl.com/. I have two other domains: mainurl.net and mainurl.org. What I need (in ISAPI_Rewrite 2, with the config made in the httpd.ini file in the root folder) is: every time someone types mainurl.net in the browser it becomes http://www.mainurl.com/ via a 301 redirect. If it's typed without www, it becomes http://www.mainurl.com/. If someone types mainurl.net/about it becomes http://www.mainurl.com/about/. Always redirect to the .com, with the www part and the final slash /. Thanks in advance, you all!

    Read the article

  • Looking for a lock-free RT-safe single-reader single-writer structure

    - by moala
    Hi, I'm looking for a lock-free design conforming to these requirements: a single writer writes into a structure and a single reader reads from this structure (this structure exists already and is safe for simultaneous read/write); but at some point the structure needs to be changed by the writer, which then initialises, switches to, and writes into a new structure (of the same type but with new content); and the next time the reader reads, it switches to this new structure (if the writer switches to a new structure multiple times, the reader discards the intermediate structures, ignoring their data). The structures must be reused, i.e. no heap memory allocation/free is allowed during write/read/switch operations, for RT purposes. I have currently implemented a ring buffer containing multiple instances of these structures, but this implementation suffers from the fact that when the writer has used all the structures present in the ring buffer, there is no more room to switch to a new structure... But the rest of the ring buffer contains data which doesn't have to be read by the reader, yet can't be reused by the writer. As a consequence, the ring buffer does not fit this purpose. Any idea (a name or a pseudo-implementation) for a lock-free design? Thanks for having considered this problem.
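    One classic shape for this (not the only one) is a triple buffer: three pre-allocated slots, a single atomic "mailbox" pointer that both sides swap against, and a monotonic sequence number so the reader can tell fresh data from stale. A minimal C11 sketch, under the assumption that atomic pointer exchange is lock-free on the target platform:

        #include <stdatomic.h>
        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        /* Single-writer / single-reader "latest value" exchange with three reusable
         * slots and no allocation after init. The writer always owns one slot, the
         * reader always owns one, and the third sits in the mailbox. Publishing or
         * taking is one atomic exchange, so unread intermediate publications are
         * simply overwritten and the reader only ever sees the latest. */

        typedef struct {
            uint64_t seq;          /* 0 = never published; grows with each publish */
            char payload[256];     /* stand-in for the real structure */
        } slot_t;

        static slot_t slots[3];
        static _Atomic(slot_t *) mailbox;
        static slot_t *writer_slot, *reader_slot;
        static uint64_t last_seen_seq;

        void init(void)
        {
            writer_slot = &slots[0];
            reader_slot = &slots[1];
            atomic_store(&mailbox, &slots[2]);
            last_seen_seq = 0;
        }

        /* Writer: fill its slot, then swap it into the mailbox. */
        void publish(const char *data, size_t len)
        {
            static uint64_t seq = 0;
            memcpy(writer_slot->payload, data, len);
            writer_slot->seq = ++seq;
            /* acq_rel: the payload/seq stores are visible before the pointer swap */
            writer_slot = atomic_exchange_explicit(&mailbox, writer_slot,
                                                   memory_order_acq_rel);
        }

        /* Reader: swap its used slot for whatever is in the mailbox.
         * Returns NULL if nothing newer has been published since the last call;
         * otherwise the returned slot stays valid until the next call. */
        slot_t *take_latest(void)
        {
            reader_slot = atomic_exchange_explicit(&mailbox, reader_slot,
                                                   memory_order_acq_rel);
            if (reader_slot->seq <= last_seen_seq)
                return NULL;                  /* unpublished or already-seen data */
            last_seen_seq = reader_slot->seq;
            return reader_slot;
        }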

    Read the article

  • Help with redirection and .com, .net and .org domains.

    - by user198553
    Hi all! I need help with some rules for ISAPI_Rewrite in my installation. I'm going to be very honest about my needs: I need to do this configuration in the next few hours, and don't have time right now to understand everything about rewrites, regular expressions and such. I really think you can help me; if I had more reputation I would even set up a bounty... :( In fact, I believe that what I need is simple: I have a .com domain. The main URL of my website is going to be http://www.mainurl.com/. I have two other domains: mainurl.net and mainurl.org. What I need (in ISAPI_Rewrite 2, with the config made in the httpd.ini file in the root folder) is: every time someone types mainurl.net in the browser it becomes http://www.mainurl.com/ via a 301 redirect. If it's typed without www, it becomes http://www.mainurl.com/. If someone types mainurl.net/about it becomes http://www.mainurl.com/about/. Always redirect to the .com, with the www part and the final slash /. Thanks in advance, you all!

    Read the article

  • Help with redirection for .com, .net and .org domains: redirecting all of them to .com.

    - by user198553
    Hi all! I need help with some rules for ISAPI_Rewrite in my installation. (If you only know mod_rewrite, that could be a good help too; I would adapt the configuration.) I'm going to be very honest about my needs: I need to do this configuration in the next few hours, and don't have time right now to understand everything about rewrites, regular expressions and such. I really think you can help me; if I had more reputation I would even set up a bounty... :( In fact, I believe that what I need is simple: I have a .com domain. The main URL of my website is going to be http://www.mainurl.com/. I have two other domains: mainurl.net and mainurl.org. What I need (in ISAPI_Rewrite 2, with the config made in the httpd.ini file in the root folder) is: every time someone types mainurl.net in the browser it becomes http://www.mainurl.com/ via a 301 redirect. If it's typed without www, it becomes http://www.mainurl.com/. If someone types mainurl.net/about it becomes http://www.mainurl.com/about/. Always redirect to the .com, with the www part and the final slash /. Thanks in advance, you all!
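    Since the question says a mod_rewrite version would be adaptable, here is a hedged sketch of the logic in Apache mod_rewrite syntax (the ISAPI_Rewrite 2 httpd.ini directives differ, so treat this only as the rule to translate):

        # Canonicalise every host that is not exactly www.mainurl.com onto it
        # with a 301, preserving the requested path.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.mainurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mainurl.com/$1 [R=301,L]

    The trailing-slash requirement would need a separate rule (or DirectorySlash) applied on the .com site itself.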

    Read the article
