Search Results

Search found 39440 results on 1578 pages for 'possible homework'.


  • pgpool2+streaming replication failover on only 2 servers?

    - by aneez
    I am trying to configure pgpool2 and postgresql 9.1 to handle failover. I currently have streaming replication running, and am using pgpool2 for read-only load balancing. I have 2 servers in my setup, both running postgresql - 1 master and 1 slave. The master is also running pgpool2. My question is how do I configure this setup to handle failover? Specifically, in the case that the master crashes, the slave has to take over and run pgpool2 as well. Most documentation and examples I have been able to find assume that pgpool2 is running on a separate server and thus "never" crashes. I may or may not be attacking the problem using the wrong tools. In my production setup I have a total of 3 identical servers, all in independent locations. The main goal of the setup is to achieve high uptime. Thus failover should be automatic, and bringing a failed node back up should cause only minimal downtime. I want all 3 nodes to be as close to identical as possible, and be able to run with just 1 or 2 nodes available. And if possible I want to use load balancing to improve performance. I would appreciate it if anyone could help me gain some insight into how to do this with my current setup, or suggest a different/better setup. Thank you!
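    For reference, a minimal sketch of the usual approach (all hostnames, paths and the script name below are placeholders, not part of the original question): pgpool2's failover_command hook can run a script that promotes the standby when the master disappears, and keeping pgpool2 itself available across the two database nodes is normally done by running it on both with pgpool's watchdog (or a separate heartbeat/virtual-IP mechanism) rather than on a third machine.

      # pgpool.conf (sketch)
      failover_command = '/etc/pgpool2/failover.sh %d %H /var/lib/postgresql/9.1/main/failover_trigger'

      # /etc/pgpool2/failover.sh
      #!/bin/sh
      failed_node_id=$1    # id of the node pgpool detected as down
      new_master_host=$2   # host that should become the new primary
      trigger_file=$3
      # touching the trigger_file named in the standby's recovery.conf ends recovery
      # and promotes the standby to a read/write primary
      ssh postgres@"$new_master_host" "touch $trigger_file"

    With pgpool2 on both nodes and a watchdog holding a virtual IP on whichever node is alive, clients only ever talk to the virtual IP, so losing either physical server does not take the proxy down with it.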

    Read the article

  • Installing Windows 8 to another computer with an OEM product Key

    - by user180671
    This question came up after reading the article about the new method Microsoft uses to license Windows 8 computers. Let's say I bought a brand new laptop with Windows 8 preloaded. Unlike the old way, there is no OEM sticker on the back of the computer that can be used for reloading the system (the new product key is stored in the BIOS, as mentioned in the article, and the key can be pulled out using software anyway). Is it possible to install Windows 8 on another computer with that particular key in case the original computer is totally damaged? Here is what I tried: First, I extracted the key with a piece of software named "Windows 8 Key Viewer". Then I used the Windows 8 upgrade tool to determine which copy of Windows 8 I should download for the installation. The tool correctly recognized the key as a legitimate one, but it claims that the key cannot be installed with retail media (since it is an OEM key). Does this mean the only way to do it is to use an OEM CD from the manufacturer? Will an ISO from an MSDN source do, or is it just not possible?

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN); the main PHP/MySQL database-driven website, which essentially acts like a static site; and a dynamic PHP/MySQL forum. It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this: location ~ \.php$ { try_files $uri =404; fastcgi_cache one; fastcgi_cache_key $scheme$host$request_uri; fastcgi_cache_valid 200 302 304 30m; fastcgi_cache_valid 301 1h; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name; fastcgi_param HTTPS off; } However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - haven't checked if the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle: if ($request_uri ~* "^/forum/") { fastcgi_cache_bypass 1; } # or possibly this, if I'm able to cache pages for anonymous visitors if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") { fastcgi_cache_bypass 1; } Will that work fine, or is there a better way to achieve this?
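    A sketch of the if-free way this is usually done (the map variable name is arbitrary, and note that nginx's if does not support && anyway): compute a skip flag with a map block in the http context, then feed it to fastcgi_cache_bypass and fastcgi_no_cache in the existing location.

      # http {} context
      map $request_uri $skip_cache {
          default        0;
          ~^/forum/      1;
      }

      # inside the existing "location ~ \.php$" block
      fastcgi_cache_bypass $skip_cache;   # serve these requests fresh
      fastcgi_no_cache     $skip_cache;   # and never store their responses

    To bypass only for logged-in forum users instead, the usual trick is to key the map on a combined string such as "$request_uri:$http_cookie" with a regex that requires both the /forum/ prefix and the login cookie name (whatever the forum software actually sets).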

    Read the article

  • Driver corruption when deploying Dell Touchpad Drivers (with software) during imaging process

    - by BigHomie
    We're an SCCM shop, and use it to deploy Windows. When deploying Dell laptops (multiple models), the touchpad drivers seem to install properly, but the software doesn't. The resulting problem is that when the touchpad is pressed, on occasion the mouse pointer will 'jump' to certain points on the screen. A possible symptom of this problem/visible sign is that the touchpad icon isn't in the system tray. The software is in the control panel, but when opened part of the GUI is pixelated, indicating a botched install, maybe? The manual resolution to this is to go into Device Manager and uninstall the driver with the option to uninstall all driver software. After a restart, the driver and software are apparently reinstalled, and from there work as expected. Obviously this partially defeats the purpose of a zero touch deployment. If anyone knows why this happens and/or a possible workaround, those answers would be valid as well. Barring that, I want to find a way to deploy the driver and touchpad software in an unattended way, so that it can be conditionally installed during the imaging process. To be honest I'm not sure how to troubleshoot this; I suppose I could try drvinst.exe to install the driver, but finding out why this fails initially would keep me from spinning my wheels.
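    One unattended approach worth sketching (the package paths are placeholders, and the silent switch for the vendor installer is an assumption that must be checked against the specific Dell/Synaptics package): stage the bare driver with pnputil in the task sequence, then run the vendor's software setup silently as a separate step, instead of relying on plug-and-play alone.

      rem Stage the INF-based driver so PnP can pick it up
      pnputil.exe -i -a "%~dp0Drivers\Touchpad\*.inf"
      rem Run the vendor software installer silently - verify the switch for your package
      rem (/s is common for Dell Update Packages, but it is not guaranteed here)
      "%~dp0DellTouchpad_Setup.exe" /s

    Installing the software after the driver is staged mirrors what the manual uninstall/reinstall cycle ends up doing, which is presumably why that fixes it.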

    Read the article

  • Google Apps routing to different servers, depending on domain

    - by Philip
    We are investigating Google Apps for Education for our group of schools. Currently, each school uses its own Exchange (2003) server. Each school has its own domain which I have added to Google Apps as additional domains. I would like to start transitioning certain staff and some new pupils over to Google Apps to start testing. In this interim phase, I need mail to be routed through Google Apps and then, if no appropriate mailbox is found, routed on to the individual schools depending on the recipient. I do know that it is possible to route mail that does not have an appropriate Google Apps mail account to a single server - under "Settings / E-mail Settings / General Settings / Routing / E-mail routing". This works well for a single organisation where all the extra mail is destined for one place. I do know that it is possible to set up Routes, under "Settings / E-mail Settings / Hosts" and then use rules, found under "Settings / E-mail Settings / General Settings / Routing / Receiving Routing". I can then filter based on e-mail domain and forward on to the necessary server. My problem with this, as I understand it, is that it ignores the users that have Google Apps accounts set up and sends all mail to the Exchange server. Are there any solutions for this predicament? Many thanks!

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile them myself but that's going to be too much of a hassle for me. I'm thinking about two solutions: Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough though I haven't tried it yet. I'm not sure if this is possible but can I simply do apt-get install nginx --download-only so that the package and its dependencies will be downloaded into my VM and my server can use the VM as the source repo for apt-get? Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup but subsequent updates will take too long (cloning the VM image). If I were working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated. Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
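    A sketch of how Plan A typically plays out (package names are just examples): on the connected VM, apt can fetch packages without installing them, and a caching proxy such as apt-cacher or apt-cacher-ng can then serve them to the offline box over the office LAN.

      # On the Internet-connected VM: download only - the .debs land in /var/cache/apt/archives
      sudo apt-get install --download-only nginx

      # Option A: copy the .debs across and install them directly
      scp /var/cache/apt/archives/*.deb user@devserver:/tmp/debs/
      ssh user@devserver 'sudo dpkg -i /tmp/debs/*.deb'

      # Option B: run apt-cacher-ng on the VM and point the dev server's APT at it
      # /etc/apt/apt.conf.d/01proxy on the dev server (host is a placeholder;
      # 3142 is apt-cacher-ng's default port):
      Acquire::http::Proxy "http://my-vm.local:3142";

    With the proxy approach the dev server runs plain apt-get as usual; it just never talks to the internet directly, only to the VM that already holds (or caches) the packages.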

    Read the article

  • How to stop split tunnelling over cisco VPN (OS X)?

    - by Notre
    I'm using OS X (Snow Leopard) and the built in Cisco IP Sec client to connect to my corporate VPN. Currently, everything works as designed, and desired for most people. However, I would like to be able to funnel all traffic (particularly all web browser traffic) through the VPN. (Note - I'm an end user here, not the network administrator). Is this possible? In searching around, most people are looking to do the opposite; break out the VPN and enable split tunnelling of data. I'd like to avoid the split tunnelling. Is there some setting I can make in my OS X client to make this happen? I ran across a post where routing table changes are made to force split tunnelling: how to force split tunnel routing on mac -> cisco vpn I'm thinking something similar to that might work, but I'm not a networking expert so I'm not sure where to start (or if it is even possible). Thank you! Notre
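    If the concentrator's policy can't be changed, the route-table approach from that post is usually adapted like this (a sketch only - the gateway address below is a placeholder for whatever peer/gateway the tunnel gets, and whether the built-in IPSec client honours manually added routes depends on how it sets up its own):

      # After connecting, find the tunnel's gateway / point-to-point peer
      netstat -rn

      # Route both halves of the IPv4 space via the VPN gateway (10.10.0.1 is a placeholder);
      # two /1 routes are more specific than the existing default route, so they win
      sudo route -n add -net 0.0.0.0 -netmask 128.0.0.0 10.10.0.1
      sudo route -n add -net 128.0.0.0 -netmask 128.0.0.0 10.10.0.1

    The cleaner fix, where possible, is to ask the VPN administrator to disable split tunnelling for your group on the concentrator, since the client normally just applies whatever policy it is handed.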

    Read the article

  • How to upgrade iPhone 3G to iOS 5 now that iOS 6 is out? [closed]

    - by mmmshuddup
    My friend has an old iPhone 3G and I wanted to upgrade it to iOS 5 because I had read that the performance is actually better with iOS 5 on that phone in spite of the difference in processor power. The problem is that now that iOS 6 is out, iTunes only gives me the option to upgrade to that. I am a little more leery of iOS 6 knowing that it was launched mostly for the iPhone 5, and since that phone has a way faster processor, I would assume that performance would be an issue on the iPhone 3G. How can I, if possible, upgrade just to iOS 5? Note: I would like to avoid having to jailbreak the phone by all means possible. (It's not mine and I don't want to take the risk with a phone that doesn't belong to me.) EDIT: If anyone knows of a good site to get help on questions like this, let me know. Apparently you're allowed to ask questions about iPhones here, just not this. Which is utterly asinine, but oh well. Anyone with helpful ideas, post in the comments please.

    Read the article

  • Can I use squid (or anything) to do this?

    - by user269334
    I have a really crappy VPS, and a really good computer at my office (with a really good internet connection), but behind a NAT. Is it possible to expose my good computer by doing this: 1. The good computer connects to the VPS (and keeps the connection alive) 2. The user connects to the VPS, and sends http(s) requests to the VPS. 3. The VPS just passes those http(s) requests to the good computer (including some identification, so the servers can distinguish connections) 4. The good computer passes the http(s) response to the VPS 5. In turn, the VPS receives the http(s) response, and passes it back to the client. Is it possible to do this? (btw, the VPS and the good computer are located in different countries) And also, is this a "reverse proxy"? I heard that a reverse proxy is for protecting the internal network by putting a server in the middle. And will this affect SSL configurations? (or make SSL impossible?) I'm intending to run nginx on the good computer. Thanks in advance : )
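    Yes - this is essentially a reverse proxy, with a reverse SSH tunnel (or VPN) carrying the traffic from the VPS back to the office machine. A sketch, with all hostnames and ports as placeholders:

      # On the good office computer: keep a reverse tunnel up to the VPS
      # (autossh re-establishes it if the connection drops)
      autossh -M 0 -N -R 8080:localhost:80 user@my-vps.example.com

      # On the VPS: nginx relays public requests into the tunnel
      server {
          listen 80;
          server_name app.example.com;
          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }

    SSL still works: either terminate TLS on the VPS (the certificate lives there) and proxy plain HTTP through the tunnel, or forward port 443 through the tunnel as a raw TCP stream (e.g. with nginx's stream module) and let nginx on the office machine hold the certificate.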

    Read the article

  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    this is an underpaid employee writing, who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 MBit/s downstream and 1 MBit/s upstream) and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 GBit/s both directions if I'm not mistaken). This is what I've set up (note I'm not making use of the second DSL connection at all)... ETH0 ETH1 [ SWITCH ]---[LINUX DEBIAN ROUTER]---[DSL MODEM 1]---[INTERNET] | | | PC1 | | PC2 | ... ... when my boss asked me, if it was somehow possible to get 32 MBit/s downstream and 2 MBit/s upstream. At that time I replied "no" without thinking too much about it. Now I've just had the following idea... ETH1 ETH0 ETH0 ,---[DSL MODEM 1 (NON-STATIC IP)]---, ,---, ETH0 [ SWITCH ]---[LINUX DEBIAN ROUTER] [INTERNET] [LINUX DEBIAN SERVER]---[INTERNET] | | | '---[ DSL MODEM 2 (STATIC IP) ]---' '---' PC1 | | ETH2 ETH0 PC2 | ... ... but I have absolutely no clue how to implement that. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!
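    For the downstream side, the standard Linux answer is a multipath default route on the Debian router - a sketch, with gateway addresses and interface names as placeholders:

      # Balance new outbound connections across both DSL lines
      ip route replace default scope global \
          nexthop via 192.168.100.1 dev eth1 weight 1 \
          nexthop via 192.168.200.1 dev eth2 weight 1

      # NAT out of whichever line a connection leaves on
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
      iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

    Note that this balances per connection, so a single download still tops out at 16 MBit/s; a true 2 MBit/s upstream to the dedicated server, as the second diagram hints at, would need two tunnels (one per DSL line) terminating on the server, with the same kind of multipath route spread across them.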

    Read the article

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

    Read the article

  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps install a web application. The script itself just downloads a zip archive, extracts it and changes the permissions of the extracted files to the ones needed to run the webapp. The problem now is that some users reported that after running my script, all the permissions of every file in their home directory or even on their whole computer changed to 000 (except the actual unzipped files which do have the correct permissions). The only lines in my script actually doing IO are these: URL="http://foo.com/" FILENAME="some.zip" curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null echo "Unzipping...\c" if unzip -oqq $FILENAME > /dev/null then chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration* chown -R www:www * rm $FILENAME echo "\t\t\tOK" exit 0 else echo "\t\t\tERROR" exit 1 fi I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000, not www/777.
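    The original cause isn't pinned down here, but a defensive rewrite removes the most likely failure mode (recursive chmod/chown running in whatever directory the user happened to be in). A sketch - the install path is a placeholder:

      #!/bin/sh
      set -e
      INSTALL_DIR="/var/www/myapp"     # explicit target instead of the current directory
      URL="http://foo.com/"
      FILENAME="some.zip"
      mkdir -p "$INSTALL_DIR"
      cd "$INSTALL_DIR"                # set -e aborts the script if we can't get there
      curl --silent "$URL$FILENAME" -o "$FILENAME"
      unzip -oqq "$FILENAME"
      chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
      chown -R www:www "$INSTALL_DIR"  # recurse only over the install dir, never a bare *
      rm "$FILENAME"
      echo "OK"

    Quoting $FILENAME and pinning every recursive operation to $INSTALL_DIR means that even if the script is launched from a user's home directory, nothing outside the install directory gets touched.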

    Read the article

  • Data recovery on working hard drive

    - by emgee
    So I have a 5 bay hot swap SATA enclosure that's connected to a Silicon Image-based SATA adapter in a computer. It's running XP Pro. There are two 1.5TB hard drives in slots 1 and 2 respectively, set up as RAID 1 using the Silicon Image utility. There are also two 1TB drives in bays 3 and 4, also set to RAID 1 the same way. The partitions for both RAID arrays are Dynamic partitions. A few days back, there was a bare hard drive that some files needed to be copied off of, so it was popped into bay 5, that bay was set to pass-through, and the data was copied off of it. Later, I noticed that my 1.5TB drives no longer showed up in Windows. In the Silicon Image utility, the drives showed up fine, no error. However, in Device Manager, it shows the RAID 1 array as uninitialized. It shows up as the right size, etc., but nothing else. There's no sign of anything wrong with either drive, so I'm not sure what happened exactly. I'm not the only one who has access to that computer, so it is possible something else was done to it that I don't know of. There's quite a lot of data on it still, and if at all possible, I'd prefer to not send it to Ontrack. Does anyone know of software that would restore the partitions, keeping in mind that it's a Windows LDM partition? I have access to a variety of Operating Systems, so something that would work on Mac, Windows or Linux would be acceptable. The programs I usually use are not compatible with LDM.

    Read the article

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago while in a conversation on IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the userid on files to solve a problem someone else was having. At the time I spoke up and said "linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain, when I say "linux doesn't support setuid directories" what I mean is that you can go "chmod u+s directory" and it will set the bit on the directory. However, linux (as I understood it) ignores this bit (on directories). Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with selinux - and playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with linux" with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html) I don't remember who the original person was, but the original system was a debian 6 system, and the filesystem it was running was xfs mounted with "default,acl". I've tried replicating that, but no luck so far (tried so far with various versions of debian, ubuntu, fedora and centos) Can anyone clue me in on what or how you get a system to honor setuid on a directory?
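    For comparison, here is what stock Linux actually does with those bits on a directory (a quick demo; "myuser" and the "staff" group are stand-ins): the setgid bit is honoured - new files inherit the directory's group - while the setuid bit is accepted by chmod but has no effect on the ownership of new files.

      sudo mkdir /tmp/demo
      sudo chown root:staff /tmp/demo
      sudo chmod 6777 /tmp/demo        # setuid + setgid + world-writable, i.e. drwsrwsrwx
      touch /tmp/demo/file
      ls -l /tmp/demo/file
      # -rw-r--r-- 1 myuser staff ... /tmp/demo/file
      # group "staff" was inherited (setgid works on directories),
      # but the owner is still the creating user - the setuid bit is ignored

    So a pastebin showing files created with the directory owner's uid most likely points to something other than the kernel honouring setuid on directories - for example an NFS export squashing uids, or a FUSE/vfat-style filesystem mounted with a uid= option.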

    Read the article

  • Verizon overbilling me?

    - by bek
    I have a Verizon air card for my internet, whose software is called VZ Manager. I have had it for about a year, keeping within my usage allowance the whole time until 3 months ago, when my bill came in and my 5,000 allowance had been run up to 18,000. It has continued to do so through the last 3 months. My usage reset itself yesterday for the new month. I got on Google for about 5 minutes and chatted for about an hour yesterday online. Mind you, I have never done anything any different than I do every day. I do not download music or watch videos on my computer, nothing like that. Well, since last night at 8 pm my usage says 1109.525 ALREADY! For being on the internet for an hour with no downloads. What could be causing this? Please throw me some ideas. Verizon is checking on the problem, but that usually doesn't get me the answer I want. Could someone be hacking my card and using the internet through it? I have been told that that is not possible, however I think with the internet anything is possible.

    Read the article

  • Accessing the partitions on an LVM volume

    - by projix
    Suppose you have an LVM volume /dev/vg0/mylv. You have presented this as a virtual disk to a virtualised or emulated guest system. During installation the guest system sees it as /dev/sda and partitions it into /dev/sda{1,2,5,6} and completes the installation. Now at some point you need to access those filesystems from within the host system, without running the guest system. fdisk sees these partitions just fine: # fdisk -l /dev/vg0/mylv Device Boot Start End Blocks Id System /dev/vg0/mylv1 2048 684031 340992 83 Linux /dev/vg0/mylv2 686078 20969471 10141697 5 Extended /dev/vg0/mylv5 686080 8290303 3802112 83 Linux /dev/vg0/mylv6 8292352 11980799 1844224 83 Linux However, the devices such as /dev/vg0/mylv1 do not actually exist. I guess that because they're within an LV, the OS does not recognise this type of nesting by default. Is there any way I can prod Linux so that /dev/vg0/mylv1 or equivalent appears and thus becomes mountable within the host system? I understand that it's possible with qemu-nbd, and will use this if necessary. However, I was hoping for something more direct if possible, rather than simulating a network block device and attaching that.
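    The usual way to get at those nested partitions on the host is kpartx (from the multipath-tools package), which creates device-mapper nodes for each partition inside the LV. A sketch - the exact mapper names vary slightly between versions, some inserting a "p" before the partition number:

      # Create mappings for the partitions inside the LV
      kpartx -av /dev/vg0/mylv
      # -> /dev/mapper/vg0-mylv1, /dev/mapper/vg0-mylv2, /dev/mapper/vg0-mylv5, ...

      mount /dev/mapper/vg0-mylv5 /mnt/guest-root

      # When finished: unmount and remove the mappings before booting the guest again
      umount /mnt/guest-root
      kpartx -dv /dev/vg0/mylv

    This is more direct than qemu-nbd since nothing is emulated - the mappings are just offsets into the existing LV - but the same caution applies: never mount the guest's filesystems read-write while the guest itself is running.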

    Read the article

  • How to provide users with isolated drive letters in Windows 2008 R2 (Terminal Server) [migrated]

    - by Pierre
    I need to be able to host several RDP sessions on a Terminal Server, where users of group A see a drive X: mapped to a given folder of the server and another group B see the same drive letter X: mapped to another folder. For instance: User 1, Group A X: --> C:\data\A User 2, Group A X: --> C:\data\A User 3, Group B X: --> C:\data\B User 4, Group C X: --> C:\data\C Is this possible? If so, how do I configure the virtual drive mapping so that the user has nothing special to do; i.e. I want the letter X: to be available to Remote Apps launched by the user, or if the user logs in to the remote desktop. Can I somehow use subst to get this to work? I would like to avoid, if possible, mounting drive letters on local shares (i.e. I don't like the idea of having to go through \\localhost\data-A to reach the user's data).
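    A per-user logon script is one way to get this without shares, since subst mappings (like other DOS device mappings) are created per logon session on a terminal server. A sketch - group names and paths are placeholders, and the group check via whoami is just one way to branch:

      rem logon.cmd - map X: to a different local folder depending on group membership
      whoami /groups | findstr /i /c:"MYDOMAIN\GroupA" >nul && subst X: C:\data\A
      whoami /groups | findstr /i /c:"MYDOMAIN\GroupB" >nul && subst X: C:\data\B
      whoami /groups | findstr /i /c:"MYDOMAIN\GroupC" >nul && subst X: C:\data\C

    Because the mapping is created inside the user's own session, RemoteApp programs launched by that user run in the same session and should see the same X: drive, and users in different groups never see each other's mapping.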

    Read the article

  • Show (copy) data at "X" time and stop update

    - by Anka
    I have two sheets. In the first sheet, cell F4 holds 00:00:00 (a countdown). G9, G10 and G11 are cells that receive live data (decimal numbers). In the second sheet, I have three cells linked from Sheet1: G9 ='Sheet1'!G9, G10 ='Sheet1'!G10, G11 ='Sheet1'!G11 (which update themselves when data is modified in the first sheet). Now I want cells B9, B10 and B11 in sheet 2 to show (copy) the values from G9, G10 and G11 of sheet 1 at the moment the countdown reaches 00:00:05 (5 seconds before start), and not update again if the data later changes in the cells they pulled from. In other words, capture G9 ='Sheet1'!G9 at 00:00:05 and stop there, not updating anything afterwards. I can do part of this, but the real problem is that I cannot make the cells stop updating - they need to stay frozen. I do not want to seem pretentious (my knowledge of Excel is limited), but the most appropriate solution would be a formula, not a macro or VBA, if possible. I wanted to post a picture but I cannot because of my restrictions. That said, if this is not possible with a formula, VBA is fine too (though not preferred).
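    Without VBA, the usual trick for "capture once, then freeze" is a self-referencing formula with iterative calculation switched on (Excel Options > Formulas > Enable iterative calculation). A sketch, assuming the countdown in Sheet1!F4 counts down towards zero:

      Sheet2!B9:   =IF(B9<>"", B9, IF(Sheet1!$F$4<=TIME(0,0,5), Sheet1!G9, ""))
      Sheet2!B10:  =IF(B10<>"", B10, IF(Sheet1!$F$4<=TIME(0,0,5), Sheet1!G10, ""))
      Sheet2!B11:  =IF(B11<>"", B11, IF(Sheet1!$F$4<=TIME(0,0,5), Sheet1!G11, ""))

    Each cell stays blank until the countdown first recalculates at or below 00:00:05; at that point it copies the live value once, and on every later recalculation the outer IF sees the cell is no longer blank and simply keeps its current value. The caveat is that the captured number is whatever G9/G10/G11 hold at the first recalculation inside that window.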

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples: Object1 Object2 Object3 Property Value O1 O2 O3 P1 "abc" O1 O2 O3 P2 "xyz" O1 O3 O4 P1 "123" O2 O4 O5 P1 "098" Second, an inheritance tree: O1 O2 O4 O3 O5 Or viewed as a relation: Object Parent O2 O1 O4 O2 O3 O1 O5 O3 O1 null The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence. NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now. NOTE 2: I have tiggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less effiecient outer joins. The objective is: Given a pair of (Property, Value), return all object triples that have that property with that value either defined explicitly or inherited from a parent. NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C). NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent. NOTE 3: When (A,B,C) has two parents - (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1 In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambigous partial order for triples in terms of ancestry. For example, given the pair of (P1, "abc"), the result set of triples will be: O1, O2, O3 -- Defined explicitly O1, O2, O5 -- Because O5 inherits from O3 O1, O4, O3 -- Because O4 inherits from O2 O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3 O2, O2, O3 -- Because O2 inherits from O1 O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3 O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2 O3, O2, O3 -- Because O3 inherits from O1 O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3 O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2 O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 O4, O2, O3 -- Because O4 inherits from O1 O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3 O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2 O5, O2, O3 -- Because O5 inherits from O1 O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3 O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2 O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3). The straightforward way to do it is the following. 
First, for every triple that a property is defined on, select all possible child triples: select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value from TriplesAndProperties tp -- Select corresponding objects of the triple inner join Objects as Objects1 on Objects1.Id = tp.O1 inner join Objects as Objects2 on Objects2.Id = tp.O2 inner join Objects as Objects3 on Objects3.Id = tp.O3 -- Then add all possible children of all those objects inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, second step is to select just one of those conflicting results: select * from ( select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value, row_number() over( partition by Children1.Id, Children2.Id, Children3.Id, tp.Property order by Objects1.[depthInTheTree] descending, Objects2.[depthInTheTree] descending, Objects3.[depthInTheTree] descending ) as InheritancePriority from ... (see above) ) where InheritancePriority = 1 The window function row_number() over( ... ) does the following: for every unique combination of objects triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with a GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value. This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfuly slow. One might point out the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from the SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables. EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Did the time come that I shall be reduced to that kind of hacks? The despair sets in.

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error: 100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100603 19:11:46 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Warning: database page corruption or a failed InnoDB: file read of page 112541. InnoDB: Trying to recover it from the doublewrite buffer. InnoDB: Dump of the page: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): lots of binary data 100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043 InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765 InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Dump of corresponding page in doublewrite buffer: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): more binary data 100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363 InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931 InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Also the page in the doublewrite buffer is corrupt. InnoDB: Cannot continue operation. InnoDB: You can try to recover the database with the my.cnf InnoDB: option: InnoDB: innodb_force_recovery=6 100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL: `100603 19:19:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=2 max_threads=151 threads_connected=2 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x15d33f0 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x453aff00 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364] /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166] /lib64/libpthread.so.0 [0x3a6e40eb10] /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310] /usr/libexec/mysqld [0x7674d8] /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48] /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d] /usr/libexec/mysqld [0x7dd1c1] /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0] /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc] /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09] /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964] /usr/libexec/mysqld [0x6aafe9] /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196] /usr/libexec/mysqld [0x63f9cd] /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0] /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsign ed long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b] /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79] /usr/libexec/mysqld [0x5d34b6] /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45] /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321] /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8] /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6] /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d] /lib64/libpthread.so.0 [0x3a6e40673d] /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 0999999 99999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,100 74,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10 104,10105,10106,10107,10108,10109)) GROUP BY i.hostid thd-thread_id=3 thd-killed=NOT_KILLED The manual page at contains information that should help you find out what is causing the crash. 100603 19:19:19 mysqld_safe Number of processes running now: 0 100603 19:19:19 mysqld_safe mysqld restarted` Before recovering form an older backup as a last resort I am looking for anymore suggestions. Thanks!
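    Before giving up on the dump, two things that often help in this situation (a sketch; the database and table names are placeholders): raise innodb_force_recovery gradually instead of jumping straight to 6, and dump table by table so one corrupt table or index doesn't take the whole dump down with it.

      # /etc/my.cnf - start at 1 and only increase if mysqld still won't start
      # or the dump still fails; 6 is the most destructive level
      [mysqld]
      innodb_force_recovery = 1

      # dump one table at a time, skipping locks the crippled server can't take
      mysqldump --skip-lock-tables mydb mytable > mytable.sql

    The "same row exported over and over" symptom is typical of a corrupt secondary index being walked during the dump; at a lower recovery level, or by selecting from the affected table with FORCE INDEX (PRIMARY), the scan can sometimes be pushed onto the clustered index instead.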

    Read the article

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. There is a FreeBSD 9.0 amd64, two network cards em1 (internet), em0 (local network) configured firewall ipfw, natd, squid (not transparent), the server acts as a gateway for access to the Internet. Next problem: upload via squid is very low. At this moment I see next: natd, dhcpd load the cpu at that time when uploading through squid and there are a lot of traffic through the loopback interface. ipfw show output 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160 
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your Streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article but a very good explanation in addition to Books Online can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea. Time in StreamInsight Series http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN Forums are to do with how Ctis are enqueued or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty that is to say when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engne it is best practice to do so with a StartTime that is given to you by the system producing the event . StreamInsight processes events and it doesn't matter whether those events are being pushed into the engine by a source system or the events are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important. We could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had. CREATE TABLE [dbo].[t] ( [c1] [int] PRIMARY KEY, [c2] [datetime] NULL ) INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810') Column c2 defines the StartTime of the event on the source and as you can see the values in all 3 rows of data is the same. If we read Ciprian’s articles we know that we can define how Ctis get injected into the stream in 3 different places The Stream Definition The Input Factory The Input Adapter I personally have always been a fan of enqueing Ctis through the factory. 
Below is code typical of what I would use to do this On the class itself I do some inheriting public class SimpleInputFactory : ITypedInputAdapterFactory<SimpleInputConfig>, ITypedDeclareAdvanceTimeProperties<SimpleInputConfig> And then I implement the following function public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)), AdvanceTimePolicy.Adjust); } The configInfo .CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected and this in turn will flush through the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events. -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method though is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events currEvent.StartTime = (DateTimeOffset)dt.c2; If I go ahead and run my StreamInsight process with this configuration i can see on the output adapter that two events have been removed To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (Also the EndTime of the event). The second event arrives and it has a StartTime of before the Cti and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this and the event is dropped. The same happens for the third event as well (The second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter and our result now makes sense. The next way I tried to solve this problem by changing the value of the second parameter to TimeSpan.Zero Here is how my factory code now looks public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero), AdvanceTimePolicy.Adjust); } What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The Downside is that because the Cti is declared with the StartTime of the event itself then it does not actually flush that particular event because in the StreamInsight algebra, a Cti commits only those events that occurred strictly before them. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration As you can see all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves. 
Because the Cti issued has the same timestamp (StartTime) as the events then none of the events get flushed. I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind although only one of them made sense when I explored it some more. Where multiple events have the same StartTime I could add 1 tick to the first event, two to the second, three to third etc thereby giving them unique StartTime values. Add a timer to manually inject Ctis The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems If I want to define windows over the stream then some events may not be captured in the right windows and therefore any calculations on those windows I did would be wrong What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want to do at all. I decided then to look at the Timer based solution I created a timer on my input adapter that elapsed every 200ms. private Timer tmr; public SimpleInputAdapter(SimpleInputConfig configInfo) { ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString); this.configInfo = configInfo; tmr = new Timer(200); tmr.Elapsed += new ElapsedEventHandler(t_Elapsed); tmr.Enabled = true; } void t_Elapsed(object sender, ElapsedEventArgs e) { ts = DateTime.Now - dtCtiIssued; if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false) { EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100)); TimerIssuedCti = true; } }   In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check to see if that is greater than or equal to 200ms and if the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn’t do this check then I would enqueue a Cti every time the timer elapsed which is not something I wanted. If the difference between the two times is greater than or equal to 500ms and the last event enqueued was by a real event then I issue a Cti through the timer to flush the event Queue, otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti   currEvent = CreateInsertEvent(); currEvent.StartTime = (DateTimeOffset)dt.c2; TimerIssuedCti = false; dtCtiIssued = currEvent.StartTime; If I go ahead and run this configuration I see the following in my output. As we can see the first Cti gets enqueued as before but then another is enqueued by the timer and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • Cryptographic Validation Explained

    - by MarkPearl
    We have been using LogicNP's CryptoLicensing for some of our software and I was battling to understand how exactly the whole process worked. I was sent the following document, which really helped explain it, so if you ever use the same tool it is well worth a read.

    Licensing Basics
    LogicNP CryptoLicensing For .Net is the most advanced and state-of-the-art licensing and copy protection system you can use for your software. LogicNP CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA algorithm, which consists of a pair of keys called the generation key and the validation key. Data encrypted using the generation key can only be decrypted using the corresponding validation key.

    How does cryptographic validation work?
    When a new license project is created, a unique validation-generation key pair is created for the project. When LogicNP CryptoLicensing For .Net generates licenses, it encrypts the license settings using the generation key. The validation key can be safely distributed with your software and is used during validation. During license validation, LogicNP CryptoLicensing For .Net attempts to decrypt the encrypted license code using the validation key. If the decryption is successful, the data must have been encrypted using the generation key, since only the corresponding validation key can decrypt data encrypted with the generation key. This further means that not only is the license valid, but it was generated by you and only you, since nobody else has access to the generation key.

    Generation Key: This key is used by the CryptoLicensing Generator to generate encrypted license codes. It is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code.

    Validation Key: This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

    Note that the generation key pair is stored in the project file created by LogicNP CryptoLicensing For .Net, so it is very important to back up this file and to keep it secure. Once the file is lost, it is not possible to retrieve the key pair.

    FAQ

    Q. Do I use the same validation key to validate all license codes?
    A. Yes, the validation key (and generation key) for the project remains the same; you use the same key to validate all license codes generated using the project. You can retrieve the validation key using the "Project" menu --> "Get Validation Key & Code" menu item.

    Q. Can license codes generated using the generation key from one project be validated using the validation key of another project?
    A. No!

    Q. Is every generated license code unique?
    A. Yes, every license code generated by CryptoLicensing is guaranteed to be unique, even if you generate thousands of codes at a time.

    Q. What makes CryptoLicensing so secure?
    A. CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA asymmetric key algorithm, which can use up to 3072-bit keys. Given current computing power, it takes years to break a 3072-bit key.

    Q. Is it possible for a hacker to develop a keygen for my software?
    A. Impossible. The cryptographic algorithm used by CryptoLicensing consists of a pair of keys called the generation key and the validation key. Data encrypted with one key can only be decrypted by the other key, and vice versa. Licenses are generated using the generation key and validated using the validation key. Without the generation key, it is impossible to generate valid licenses.

    Q. What is the difference between the validation key and the generation key?
    A. The generation key is used by the CryptoLicensing Generator to generate encrypted license codes. It is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code. The validation key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

    Q. Do I have to include the license project file (.licproj) with my software?
    A. No!!! This goes against the very essence of the security of the asymmetric cryptographic scheme, because the project file contains both the validation and generation keys. With your software, you only need to include the validation key, which will be used to validate licenses generated by CryptoLicensing using the generation key. The license project file should be treated as any other valuable and confidential asset, such as your source code.

    Q. Does the license service need the license project file?
    A. Yes. The license project file is needed whenever new licenses are generated (via the UI, via the API or via the license service). As just one example, the license service generates new machine-locked licenses when activated licenses are presented to it for activation, so the license service needs the license project file.

    Q. Is it possible to embed my own data in the generated licenses?
    A. Yes. You can embed any amount of additional data in the licenses. This data has the same level of security as the license code itself and is tamper-proof. The embedded user data can be retrieved from your software.

    Q. What additional steps can I take to ensure that my software does not get cracked?
    A. There are many methods and techniques which can make it extremely difficult for a hacker to crack your software. See Writing Effective License Checking Code And Designing Effective Licenses for more information.

    Q. Why is the license service not working?
    A. The most common cause is not setting the CryptoLicense.LicenseServiceURL property before trying to validate a license. Make sure that this property is set to the correct URL where your license service is hosted. The next most common cause is that the license project file on the web server where your license service is hosted is not the latest. This happens if you make changes to the license project (for example, set the 'Enable With Serials' setting for a profile) but don't upload the updated project file to your web server.

    Q. Why are my serials not working?
    A. Serial codes require the use of a license service. See Using Serial Codes for more details. Also see the earlier question 'Why is the license service not working?'

    Q. Is the same validation key used to validate license codes generated from different profiles?
    A. Yes. Profiles are just pre-specified license settings for quickly generating licenses with those settings. The actual license code is still generated using the license project's cryptographic generation key and can therefore be validated using the project's validation key.

    Q. Why are changes made to a profile not getting saved?
    A. Simply changing license settings via the UI and saving the license project does not save those settings to the active profile. You must first save the license settings to a profile using the Save/Save As command from the Profiles menu (see above).

    Q. Why is validation of activated licenses failing from CryptoLicensing Generator, but working from my software?
    A. Make sure that you have specified the URL of the license service in the Project Properties dialog. Also see the earlier question 'Why is the license service not working?'

    Q. How can I extend the trial period of my customer?
    A. To extend the evaluation period of the customer, simply send them a new license code specifying the desired evaluation limits. Evaluation information such as the days used, executions, etc. is stored in garbled form in a registry location derived from the license code. Therefore, when a new license code is used, the old evaluation information is ignored and a new evaluation period is started.
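    To make the mechanism concrete, here is a minimal sketch of the same asymmetric idea using the .NET framework's built-in RSA classes. To be clear, this is not the LogicNP CryptoLicensing API: the class name, the license string and the use of sign/verify (rather than CryptoLicensing's encrypt/decrypt of the license settings) are purely illustrative. The point it demonstrates is the one the FAQ keeps returning to: licenses produced with the private "generation" key can be checked with the public "validation" key, but the validation key alone can never mint new licenses.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class AsymmetricLicenseSketch
        {
            static void Main()
            {
                // Vendor side: the full RSA key pair plays the role of the "generation key" and stays secret.
                using (var generationKey = new RSACryptoServiceProvider(2048))
                {
                    // Only the public half (the "validation key") ships with the software.
                    string validationKeyXml = generationKey.ToXmlString(false);

                    // License settings, including any embedded user data, serialized as bytes (made-up example values).
                    byte[] license = Encoding.UTF8.GetBytes("Product=MyApp;Expires=2012-12-31;User=Contoso");

                    // Generation: sign the license with the private key.
                    byte[] signature = generationKey.SignData(license, new SHA1CryptoServiceProvider());

                    // Client side: the installed software holds only the validation key,
                    // so it can verify licenses but can never produce new ones.
                    using (var validator = new RSACryptoServiceProvider())
                    {
                        validator.FromXmlString(validationKeyXml);
                        bool genuine = validator.VerifyData(license, new SHA1CryptoServiceProvider(), signature);
                        Console.WriteLine(genuine ? "License is genuine." : "License is invalid or has been tampered with.");
                    }
                }
            }
        }

    That asymmetry is what the "no keygen is possible" claim rests on, and it is also why the license project file, which holds the generation key, has to be guarded like source code.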

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It's been more than a month since SharePoint 2010 RTMed, and a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. Quite a few people have written blogs about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén, so make sure you check those out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem is that what counts as "best" is relative, which is precisely why you are reading this post now. Most of the posts I read are outdated or need updating, use the wrong OSes or virtualization solutions (again, opinions vary), or use them the wrong way. Here's a developer's view of building the ultimate SharePoint 2010 development rig. If you are a sales guy, it's time to close this window.

    Confusion 1: Which host operating system and virtualization solution to use?
    This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on the subject. But if you are planning to build the ultimate development rig, then Windows Server 2008 R2 with Hyper-V is the option you should be looking at. I have been using it as my primary OS for about 6-7 months now, and I haven't had a single driver or application compatibility issue. In my experience all the Windows 7 drivers work fine with Windows 2008 R2 as well. You can enable Aero for eye candy (and the Windows 7 look and feel), and except for a few things like hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best workstation OS I have used to date. Frankly, though, the answer depends primarily on one question: are you willing to change your primary OS? If the answer is yes, then Windows 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox, both of which are equally good. Those coming from a Virtual PC background might prefer Sun VirtualBox. These also provide support for running 64-bit guest machines on 32-bit hosts if the underlying hardware is truly 64-bit; see my earlier post on this. Since we are going to build the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.

    Confusion 2: Should I use a multi-(virtual)-server setup?
    A lot of people use multiple servers for their development environments, as Wictor Wilén suggests: one server hosting Active Directory, one hosting SharePoint Server and another for SQL Server. True, this mimics the production environment most closely, but as somebody who has fallen for this setup before, I can tell you that you don't really gain anything by doing it. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple server-class machine instances in parallel, a lot of processor cycles are wasted to no good use. In my personal experience, as somebody who needs to switch between MOSS 2007 and SharePoint 2010 environments from time to time, the best possible solution is to:
    - Make the host Windows Server 2008 R2 machine your domain controller (AD server)
    - Make all your virtual guest OSes join this domain
    - Have each individual guest OS image run its own local SQL Server instance
    The advantages: you can reuse the users and groups in each of the guest operating systems and manage the users in one place; AD is lightweight and doesn't take up too many resources on your host machine; and having a separate SQL instance for each development image gives you maximum flexibility in configuration. For example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.

    Confusion 3: Which operating system should I use to run SharePoint 2010?
    Now that's a no-brainer. Use Windows Server 2008 R2 as your guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your guest OS instead, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here.

    Okay, now that we have made our choices, let's get to the interesting part of building the rig.

    Step 1: Prepare the host machine – Install Windows Server 2008 R2
    Install Windows Server 2008 R2 on your best desktop/laptop. If you have read this far, I am quite sure you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you nuke your current OS. There are plenty of blogs telling you how to make a Windows 2008 R2 workstation feel and behave like a Windows 7 machine; follow one, and once you are done, head to Step 2.

    Step 2: Configure the host machine as a Domain Controller
    Before we begin, note that this step is completely optional; you can simply use local users on the guest machines instead, but this is a much cleaner approach to managing users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a domain controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.

    Step 3: Prepare the guest machine – Install Windows Server 2008 R2
    - Open Hyper-V Manager
    - Choose to create a new guest operating system
    - Allocate at least 2 GB of memory to the guest OS
    - Choose the Windows 2008 R2 installation media
    - Start the virtual machine to commence installation
    Once the installation is done, activate the OS.

    Step 4: Make the guest operating systems join the domain
    This step is quite simple; just follow the steps below.
    - Fire up Hyper-V Manager and open your guest OS
    - Click Start, right-click 'Computer' and choose 'Properties'
    - On the window that pops up, click 'Change Settings'
    - On the 'System Properties' window that comes up, click the 'Change' button
    - A window named 'Computer Name/Domain Changes' opens; in the text box titled Domain, type in the domain name from Step 2
    - Click OK; Windows shows the welcome-to-the-domain message and asks you to restart the machine. Click OK to restart.
    If joining the domain fails, it means you have not set up networking in Hyper-V so the guest OS can communicate with the host. To enable it, follow the steps I mentioned in this post earlier.

    Step 5: Install SQL Server 2008 R2 on the guest machine
    SQL Server 2008 R2 installs without hassle on Windows Server 2008 R2 (SQL Server 2008 needs SP2 to work properly on Windows 2008 R2). SQL Server 2008 R2 also lets you add PowerPivot support to SharePoint directly. Choose to install in SharePoint Integrated Mode in the Reporting Server configuration.

    Step 6: Install KB971831 and the SharePoint 2010 prerequisites
    Now install the WCF hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 prerequisites from the SP2010 installation media.

    Step 7: Install and configure SharePoint 2010
    Install SharePoint 2010 from the installation media. After the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following:
    - Install SQL Server 2008 KB 970315 x64.
    - After the Microsoft SQL Server 2008 KB 970315 x64 installation is finished, complete the wizard.
    Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed-installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command:
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe
    Note that the SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but is not connected to a domain controller.

    Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK
    - Install Visual Studio 2010
    - Download and install the Microsoft SharePoint 2010 SDK

    Step 9: Install PowerPivot for SharePoint and configure Reporting Services
    Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint. It gets added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here. If you want the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.

    Step 10: Download and install the sample databases for Microsoft SQL Server 2008 R2
    SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, and if you want to try these out, you need data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the sample databases for Microsoft SQL Server 2008 R2 from CodePlex.

    And you are done! Fire up your Visual Studio 2010 and start coding away!
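    One optional extra, offered purely as a sanity-check sketch and not part of the steps above: a throwaway console app that confirms the guest actually joined the domain (Step 4) and that the main services are running. The service names below are assumptions based on default instance names, so check services.msc on your own rig and adjust them (it needs a reference to System.ServiceProcess.dll).

        using System;
        using System.Net.NetworkInformation;
        using System.ServiceProcess;

        class RigSanityCheck
        {
            static void Main()
            {
                // Confirm the guest OS actually joined the development domain (Step 4).
                Console.WriteLine("Domain: " + IPGlobalProperties.GetIPGlobalProperties().DomainName);

                // Assumed default service names; adjust for named instances such as POWERPIVOT.
                string[] services = { "MSSQLSERVER", "SPTimerV4", "ReportServer" };
                foreach (string name in services)
                {
                    try
                    {
                        using (var sc = new ServiceController(name))
                        {
                            Console.WriteLine(name + ": " + sc.Status);
                        }
                    }
                    catch (InvalidOperationException)
                    {
                        Console.WriteLine(name + ": not found on this machine");
                    }
                }
            }
        }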

    Read the article

< Previous Page | 379 380 381 382 383 384 385 386 387 388 389 390  | Next Page >