Search Results

Search found 39497 results on 1580 pages for 'two shoes'.


  • Dropbox’s Great Space Race Delivers Additional Space to Students

    - by Jason Fitzpatrick
    If you're a student or faculty member (or still have an active .edu email account), now's the time to cash in on some free cloud storage courtesy of Dropbox's Great Space Race. Just by linking your .edu address with your Dropbox account you'll get an extra 3GB of storage for the next two years. The more people from your school that sign up, the higher the total climbs–up to an extra 25GB for two years. The Space Race lasts for the next eight weeks; you can read more about the details here or just jump to the signup page at the link below. The Great Space Race [Dropbox]

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implements the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

    - Can view all the Opportunities where they are a member of the Opportunity Team
    - Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains:

    - How to discover the data security implemented in Fusion Applications
    - How to customize data security
    - An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of the data security policies on an Opportunity:

    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team
    Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access
    Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of the columns:

    Policy Description - Explains the data filters that are implemented as a SQL WHERE clause in a data security grant.
    Policy Store Implementation - Provides the implementation details of the data security grant for this policy.

    In this example the Opportunities listed for a "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.

    Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.

    Best Practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.

    Solution: Modify the second data security policy in the above example as follows: a Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential.

    Implementation of this policy is more invasive. The seeded SQL WHERE clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and a condition that checks the confidential flag must be added (for example, something along the lines of AND confidential_flag <> 'Y'; the exact column name depends on how the custom attribute was defined).
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition, and end date the seeded grant.

    c.) Illustrative Example (Implementing Requirement 2)

    A data security policy contains the following components: Role, Object, Instance Set, and Action. Of these four components, the Role and Instance Set are the only ones that are customizable; the Object and the Actions for that Object are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team":

    1. Find the seeded policy.
    2. Identify the Role, Object, Instance Set, and Action components of the policy.
    3. Create a new custom instance set based on the seeded instance set.
    4. End date the seeded policies.
    5. Create a new data security policy with the custom instance set.

    c-1: Find the seeded policy

    Step 1: Find the Role, open it, and find its Policies.

    Step 2: Click on the Data Security tab, sort by "Resource Name", and find all the policies with the Condition "where they are a territory resource in the opportunity territory team". In this example, we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.

    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it, and click on the Condition tab. Sort by Display Name and find the instance set. Edit the instance set and copy the "SQL Predicate" to a notepad. Create a new instance set with the modified SQL Predicate by clicking on the icon as shown below.

    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set:

    1. Repeat the navigation in Step 2.
    2. Edit each of the 5 policies and end date them.
    3. Create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs.
    4. In the "Rules" tab, pick the new instance set that was created in Step 3.

    Read the article

  • Day 2 - Game Design Documentation

    - by dapostolov
    So yesterday I didn't cut any code for my game, but I was able to do a tiny bit of research on the XNA game development technology and the communities out there, and do you know what? I feel I'm a bit closer to my goal. The bad news is I didn't cut code today either. However, not all is lost, because I wanted to get my ideas on paper, and today I did just that.

    Today, I began to jot down notes about the game and how I felt the visual elements would interact with each other. Unlike my workplace, my personal level of documentation is nothing more than a task list or a mind map of my ideas; it helps me streamline my solutions quite effectively and circumvent the long process of articulating each thought to the n-th degree. I truly dislike documentation (because I have an extremely hard time articulating my thoughts and solutions); however, because I tend to do a really good job with documentation, I tend to get stuck writing the buggers. But as a general remark: 'No developer likes documentation.' For now let's stick with my basic notes and call this post a living document.

    Here are my notes, fresh from watching the first episode of Merlin's second season! Actually, a quick recommendation to anyone who is reading this (if anyone is): I truly recommend you envelop yourself in the medium or task you're trying to tackle. Be one with the moment and feel it! For instance: are you writing a fantasy script / game? What would the music of the genre sound like? For me, the Conan the Barbarian soundtrack by Basil Poledouris is frackin awesome. There are many other good CDs out there which I listen to (some even use medieval instruments), but Conan I keep returning to. It's a creative trigger for me. Ask yourself what the imagery would look like. Time to surf Google for artists' renditions of fantasy! What would the game feel like? Start playing some of your favorite games that inspire you. Be wary, though: have some self control and don't let it absorb your time.

    Anyhow, onto the documentation...

    Screens, Scenes, and Sprites. Oh My! (groan...)

    The first thing that came to mind were the screens. I thought the following would suffice:

    - Menu Screen
    - Character Customisation Screen
    - Loading Screen?
    - Battle Ground

    The Menu Screen

    OK. So, the thought here is when the game loads, a huge title is displayed: Wizard Wars. The player is prompted with 3 menu items: 1 Player Game, 2 Player Game, and Exit. Since I'm targeting the PC platform, as a non-networked game to start, I picture myself running my mouse over each menu option; the visual element of the menu item changes, along with a sound to indicate that I am over a current menu item. And as I move my mouse away, it changes back, possibly with an exit mouse sound. Maybe on the screen somewhere is a brazier alight with a magical tome open right beside it, OR maybe the tome is the menu! I hear the menu music as mellow, not obtrusive or piercing. On a menu item select, a confirmation sound bellows to indicate the player's selection. The Esc key will always return me to the previous screen, or the desktop. The menu screen must feel... dark, like a really important ritual is about to happen, and thus the music should build up.
    1 Player Game -> Customize Character(s)
    2 Player Game -> Customize Character(s)
    Exit -> Back to Windows

    Notes: So the first things I pick up here are a couple of points:

    - First and foremost, my artistic abilities suck crap, so I may have to hire an artist (now that I've said that, let's get techy).
    - Graphical objects will be positioned within a scene on each screen / window.
    - Menu items will be represented graphically, possibly animated, and have sound / animation effects triggered by user input or a timeline.
    - I have an animated scene involving a brazier or fire on a stick.
    - IF I were to move this game to the Xbox, I'd have to track which menu item is currently selected (unless I do a mouse pointer type thing).
    - A WindowObject has a scene.
    - A Scene has many GameObjects.
    - A GameObject has a position and a graphic or animation.
    - A MenuObject is a GameObject which has mouse in, mouse out, and click events which either do something graphically (animation), do something with sound, or move to another screen.

    Character Customisation Screen

    With either the 1 or 2 player option selected, both selections will come to this screen; a wizard requires a name, powers, and vestments of course! Player one will configure his character first, and then player two. I considered a split screen for PC, but having two people fighting over a keyboard would probably suck. For Xbox, a split screen could work; maybe when I get into the networking portion (phase 2 blog?) of this game I will remove the 2 player option for PC and provide only multiplayer, and I will leave 2 player for Xbox... hmm...

    Anyhow... I picture the creation process as follows:

    - Name: (textbox / keyboard entry) - for Xbox, this would have to be different.
    - Robe Color: (color box, or something)
    - Stats: Speed, Oomph, and Health (as sliders), 1 as minimum and 10 as maximum.
    - Ok, Back, and Cancel buttons / options.

    Each stat has a benefit; they are listed below. The idea is the player decides if he wants his wizard to run fast, be a tank, and ... hit with a purse. Regardless, the player will have a pool of 12 points to use. Ideally, a balanced wizard will have 5 in each attribute. Spells? The only spell of choice is a ball of fire, which comes without question. The music and screen should still feel like a ritual.

    The Character

    Speed: Basically, how fast your character moves and casts.
    Oomph (best monster truck voice): PURE POWAH!!! The damage output of your fireball.
    Health: How much damage you can take.

    Notes: I realise the game dynamics may sound uninteresting at the moment, but I think after a couple of releases we could have some other grand ideas, such as saved profiles, gold to upgrade an arsenal of spells, talents, etc... but for now... a vanilla fireball-thrower mage will suffice for this experiment.

    OK. So...

    - A MenuObject may need to be loosely coupled to allow future items such as networking? It may be a button?
    - A CharacterObject has a name, speed, oomph, health, and a funky robe color; a cap on the three stats (1-10); and an arsenal of 1 spell (possibly could expand this).

    The Loading Screen

    As is.

    The Battleground Screen

    For now, I'm keeping the screen at max resolution for the PC. The screen isn't going to move or even be a split screen. I'm not aiming high here because I want to see what level of change is involved when new features / concepts are added to game content. I'm interested to find out if we could apply techniques such as MVC or MVVM to this type of development, or is it too tightly coupled?
    This reminds me of when my best friend and I were brainstorming our game idea (this is going back a while... 1994, 6?) and he cringed at the thought of bringing business technology into games, especially when I suggested a database to store character information and COM / DCOM as the medium. But it seems I wasn't far off (reflecting); just like his implementation of an XML "config file" for dynamic DirectX menus back before .NET in 1999... anyhow... I digress...

    The Battle

    One screen, two characters lobbing balls of fire at each other... It doesn't get better than that. Every so often a scroll appears... and the fireballs bounce off walls, or the wizard has rapid fire, or even scrolls of healing! The scroll options are endless. Two bars at the top, each the color of the wizard (with their name beside the bar), indicate how much health they have. Possibly the appearance of the scrolls means the battle is taking too long? I'm thinking 1 player controls: up, down, left, right and space to fire. Or even possibly, mouse click and shift + mouse button to fire a spell in the direction they are facing. Two player controls: a, s, d, f and space AND arrows (up, down, left, right) and the Del key or Ctrl. The game ends when a player has 0 health, and a dialog box appears asking for a rematch / reconfigure / exit. Health goes down when a fireball (friendly or not) connects with a wizard. When a wizard connects with a scroll, a countdown clock / icon appears near the health bar and the wizard begins to glow. For the most part, a wizard can have only 1 scroll effect on him at a time.

    Notes: OK, there's a lot to cover here.

    A CharacterObject is a GameObject:
    - it travels at a set velocity
    - it travels in a direction
    - it has sounds (walking, running, casting, impact, dying, laughing, whistling, other?)
    - it has animations (walking, running, casting, impact, dying, laughing, idle, other?)
    - it has a lifespan (determined by health)
    - it is alive or dead
    - it has a position

    A ScrollObject is a GameObject:
    - it carries a transference of points, "damage" (or healing, or a bad scroll effect?), determined by the caster
    - it carries a transference of "other"
    - it is stationary
    - it has a sound on impact
    - it has a stationary animation
    - it has an impact animation / or transfers an impact animation
    - it has a fade animation?
    - it has a lifespan (determined by the game)
    - it is alive or dead
    - it has a position

    A WallObject is a GameObject:
    - it has a sound on fireball impact?
    - it is a still image / stationary
    - it has an impact animation / or transfers an impact animation
    - it is dead
    - it has a position

    A FireBall is a GameObject:
    - it carries a transference of points, "damage" (or healing, or a bad scroll effect?), determined by the caster
    - it travels at a set velocity
    - it travels in a direction
    - it has a sound
    - it has a travel animation
    - it has an impact animation / or transfers an impact animation
    - it has a fade animation?
    - it has a lifespan (determined by the caster)
    - it is alive or dead
    - it has a position

    As I look at this, I can see some common attributes in each object that I can carry up to the GameObject; a quick sketch of that hierarchy follows below. I think I'm going to end the documentation here; it's taken me a bit of time to type this all out. Tomorrow I'll load up my IDE and my paint studio to get some good old fashioned cowboy hacking going!

    D.
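    Purely as an illustration of carrying those shared attributes up into a base class (the game itself targets XNA / C#, so treat this Python sketch as hypothetical pseudocode; every name here is my own invention):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class GameObject:
            # Attributes common to every object in the notes above.
            x: float = 0.0
            y: float = 0.0
            alive: bool = True
            lifespan: Optional[float] = None  # seconds; None = lives until killed

        @dataclass
        class FireBall(GameObject):
            vx: float = 0.0   # set velocity / direction of travel
            vy: float = 0.0
            damage: int = 0   # the "transference of points", set by the caster

        @dataclass
        class CharacterObject(GameObject):
            name: str = ""
            robe_color: str = "blue"
            speed: int = 4    # the three stats, each capped 1-10,
            oomph: int = 4    # drawn from the pool of 12 points
            health: int = 4

    The velocity, direction, and "transference of points" attributes shared by CharacterObject and FireBall could be pushed up further into an intermediate MovingObject class if the hierarchy grows.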

    Read the article

  • Significance of Bresenham's Line of Sight algorithm

    - by GamDroid
    What is the significance of Bresenham's line-of-sight algorithm for chasing and evading in games? As far as I know, this algorithm calculates the straight line between two given points. When implementing it in game development, I stored the points calculated by this algorithm in an array, and then I traverse this array for chasing and evading purposes. This seems to work well only at some angles, in a pixel-based / tile-based environment. What if there are some obstacles added to the path between the two points? Then this algorithm will not work, right? How well can we use Bresenham's line algorithm in game development?
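    For reference, here is a minimal sketch of the idea in Python: Bresenham enumerates the grid cells along the segment between two points, and a line-of-sight test simply checks whether any of those cells is blocked (function and variable names are my own):

        def bresenham(x0, y0, x1, y1):
            """Return the grid cells on the line from (x0, y0) to (x1, y1)."""
            points = []
            dx, dy = abs(x1 - x0), -abs(y1 - y0)
            sx = 1 if x0 < x1 else -1
            sy = 1 if y0 < y1 else -1
            err = dx + dy
            while True:
                points.append((x0, y0))
                if x0 == x1 and y0 == y1:
                    break
                e2 = 2 * err
                if e2 >= dy:
                    err += dy
                    x0 += sx
                if e2 <= dx:
                    err += dx
                    y0 += sy
            return points

        def has_line_of_sight(start, goal, blocked):
            """blocked: set of (x, y) tiles occupied by obstacles.
            Endpoints are excluded so the chaser and target don't block themselves."""
            return not any(p in blocked for p in bresenham(*start, *goal)[1:-1])

    This answers the obstacle question directly: the algorithm only enumerates cells; it is the caller's job to test those cells against an obstacle map. Line of sight tells a chaser whether it can see (or shoot at) the target; to actually navigate around obstacles you would switch to a pathfinding algorithm such as A*.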

    Read the article

  • Bootstrap responsive CSS [migrated]

    - by savolai
    I have a four column design and I am using Bootstrap. The design renders fine in a single column on mobile devices, but in "(min-width: 768px) and (max-width: 979px)" I get four columns even though there is room for only two. So clearly, the rows/spans setup would need to be rethought for those sizes. The only way I can imagine doing this is to use semantic CSS classes in the HTML, include the grid classes only in the CSS using LESS, and then, depending on screen size, apply different grid classes to achieve a four or two column layout. I'm not sure if this would work either, though. Is this the way to go, or am I overcomplicating this? Thanks! Also at: https://groups.google.com/forum/#!topic/twitter-bootstrap/R5jEp0oQ_-E

    Read the article

  • New Tuxedo White Papers

    - by todd.little
    As part of the Tuxedo 11gR1 release, I've written two new white papers on Tuxedo. One is called "Tuxedo in a SOA World" and discusses how Tuxedo fits into SOA-based applications. It covers most of the various connectivity options from Tuxedo into SOA environments and gives guidance as to which connectivity options are best suited to a particular application requirement. The other white paper, "SCA: Bringing Modern SOA Programming to Tuxedo", is of a more technical bent and focuses on using the SCA features in SALT to easily build SOA-based applications on Tuxedo without using a lot of technical APIs. In fact, services built using SALT's SCA support don't require any technical APIs, just pure business logic, and SCA clients need at most a couple of API calls, simply to look up a service. You can find these two new white papers, as well as some additional white papers, at http://www.oracle.com/technology/products/tuxedo/index.html.

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with Heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211.

    The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway:

    Pinging 69.59.196.211 with 32 bytes of data:
    Reply from 69.59.196.220: Destination host unreachable.

    Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):

    Interface: 69.59.196.220 --- 0xa
      Internet Address    Physical Address     Type
      69.59.196.161       00-26-88-63-c7-80    dynamic
      69.59.196.210       00-15-5d-0a-3e-0e    dynamic
      69.59.196.212       00-21-5e-4d-45-c9    dynamic
      69.59.196.213       00-15-5d-00-b2-0d    dynamic
      69.59.196.215       00-21-5e-4d-61-1a    dynamic
      69.59.196.217       00-21-5e-4d-2c-e8    dynamic
      69.59.196.219       00-21-5e-4d-38-e5    dynamic
      69.59.196.221       00-15-5d-00-b2-0d    dynamic
      69.59.196.222       00-15-5d-0a-3e-09    dynamic
      69.59.196.223       ff-ff-ff-ff-ff-ff    static
      224.0.0.22          01-00-5e-00-00-16    static
      224.0.0.252         01-00-5e-00-00-fc    static
      225.0.0.1           01-00-5e-00-00-01    static

    On our Linux gateway instances arp -a shows:

    peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
    stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
    peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
    peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
    peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
    peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
    peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1

    Why would ARP occasionally set the entry for this failed server as <incomplete>? Should we be defining our ARP entries statically? I've always left ARP alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

    THINGS WE HAVE TRIED

    I added a static ARP entry for testing on one of the Linux gateways, which still didn't help:

    root@haproxy2:~# arp -a
    peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
    peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
    stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
    peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
    peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
    peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
    peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1

    root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
    root@haproxy2:~# ping 69.59.196.220
    PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.
    --- 69.59.196.220 ping statistics ---
    7 packets transmitted, 0 received, 100% packet loss, time 6006ms

    Rebooting the Windows web server solves this issue temporarily with no other changes to the network, but our experience shows this issue will come back.
    Swapping network cards and switches

    I noticed the link light on the switch port for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable, with the same result. I tried changing the properties of the network card in Windows, and the server locked up and required a hard reset after clicking apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again, we will know that it is not an issue with the network card. (We also tried another switch we have on hand; no change.)

    Changing network hardware driver versions

    We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships with Windows Server 2008 R2.

    Replacing network cables

    As a last-ditch effort, we remembered another change that occurred: the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green, of lengths 1ft - 3ft, for the private interfaces, and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week... aaaaaand then the problem recurred.

    Disabling checksum offload, removing TProxy

    We also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.
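    One way to narrow down when resolution breaks is to probe ARP directly from the gateway on a schedule and log the failures. A rough sketch in Python (this assumes the scapy package and root privileges; the address and interface are just the ones from this question):

        from scapy.all import ARP, Ether, srp

        def arp_resolves(ip, iface="eth1", timeout=2):
            # Broadcast a who-has request for `ip`; True if any host replies.
            answered, _ = srp(
                Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                iface=iface, timeout=timeout, verbose=False,
            )
            return len(answered) > 0

        if __name__ == "__main__":
            print(arp_resolves("69.59.196.220"))

    Run from cron, a probe like this would show whether the Windows server stops answering who-has requests entirely (pointing at the NIC/driver) or answers probes while the kernel cache still shows <incomplete>.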

    Read the article

  • Sun Ray 3 Plus Appliance Announced

    - by [email protected]
    There were many of you out there wondering whether Oracle was going to keep, and add to, the Sun Ray and Sun virtualized desktop product suite, and there have been a number of affirmative statements over the last many months. However, none of them resound like this; the introduction of a new product pretty much proves the point. A couple of minutes before 3:00 local time yesterday, Oracle announced the release of a new Sun Ray appliance, the Sun Ray 3 Plus. This is the unit that will replace the SR 2 FS (which has been for sale now since the middle of last decade). Physically it is about the same size as the 2 FS, but there are some significant differences...

    As you can see, there is no smart card reader in the front - that has moved to the top to ensure only one hand is required to insert the card. There is also a larger surround on the card reader that lights up to show the user the card is being read (properly). A new power on/off switch on the front essentially brings power consumption to ~0 watts, but there is also a new 'sleep' timer that looks for 30 minutes of inactivity and then drops the power consumption down to ~1 watt. Two USB 2.0 ports are accessible on the front instead of one. The standard mic-in and headphone-out ports are there as well.

    There is even more interesting stuff on the back. From the top down there are two more USB 2.0 ports, for a total of four, but then the Oracle "Peripheral Kit" keyboard includes a 3-port USB hub, too. There's a 10/100/1000 Ethernet port as well as a 1000 Mb SFP port, a standard DB-9 serial port, and then two DVI ports. Then there is the really big news: two DVI ports driving 2560 x 1600 resolution each. Most PCs can't do that without adding an adapter card.

    Now, the images I have here are ones taken on a prototype a couple of months back. They are essentially the same as the production unit, but if you would like to see an image of the production Sun Ray 3 Plus unit you can see one here. There is a full data sheet available here.

    So this is the first Oracle Sun Ray desktop appliance. Proof that the product line lives on. A very good start!

    Read the article

  • My MSDN magazine articles are live

    - by Daniel Moth
    Five years ago I wrote my first MSDN magazine article, and 21 months later I wrote my second MSDN Magazine article (during the VS 2010 Beta). By my calculation, that makes it two and a half years without having written anything for my favorite developer magazine! So, I came back with a vengeance, and in this month's April issue of the MSDN magazine you can find two articles from yours truly - enjoy: A Code-Based Introduction to C++ AMP Introduction to Tiling in C++ AMP For more on C++ AMP, please remember that I blog about it on our team blog, and we take questions in our MSDN forum. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • linking Google AdWords account to Google Analytics account

    - by crmpicco
    I have a Google Analytics account that has two profiles, one for www.ayrshireminis.com and one for www.crmpicco.co.uk. I have a Google AdWords account that I would like to link to my Google Analytics account, but for some reason the Google AdWords admin is telling me I cannot do that. Within the AdWords admin, under My Account > Linked Accounts > Google Analytics, both profiles show as "Not Available"... it also shows this message: "None of your profiles are available for linking due to your account settings." How can I link these two accounts?

    Read the article

  • Best development architecture for a small team of programmers

    - by Tio
    Hi all.. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things.

    I've used various architectures in my previous work experiences. In one, I was developing on a server on the network (no source control, of course). In another, I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from everywhere and show clients anything.

    For me, this last solution (the one I have at home) is the best:

    - Network server with LAMP stack.
    - The server has a public IP so we can access it by domain name.
    - Use subdomains for each project.
    - Everybody works directly on the network server.

    I think the problem arises when two or more people want to work on the same project; in this case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply:

    - Network server with LAMP stack (a test server, so to speak), publicly accessible through the web.
    - LAMP stack on every developer's computer (minus the database).
    - We develop locally, test, then check the changes into the test server, and presto.

    What do you think? Maybe I should start doing this at home.. Thanks and best regards...

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services or SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

    The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD, and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about).

    To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex. Well, it is, and that's why SQL Server tries to do it for you: at start up it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and this is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we?

    You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself.

    You may also have thought: what happens if I change my service account? Won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as; it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

    Reading this you are probably thinking: oh my goodness, this is really difficult. It is. However, I've found today, in investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the rights in AD to set an SPN mapping for the computer account.
    This then allows the SPN mapping to work. I believe this also works for the Local System account.

    To get all the SPNs in your AD, run the following; it could produce a large file, so you might want to restrict it to a specific OU or CN:

    ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

    You will read in the links below that you need SQL to register the SPN; how this is done is covered in:

    - How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
    - Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
    - Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

    Summary

    The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and the SQL Server then uses its service account to access the remote service, so you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so have fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or Local System accounts.

    For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed "Denali" we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I'm obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I'll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions), and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

    Can you relate?

    In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It's often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they're comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other.

    First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name.

    In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I'll stick with Event) is equivalent to the Event Class in SQL Trace, since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I'll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event, and it will cause some additional "action" to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps.

    When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an artist without the years of practice. Let me draw you a map.

    Event Mapping

    The table dbo.trace_xe_event_map exists in the master database with the following structure:

    Column_name       Type
    trace_event_id    smallint
    package_name      nvarchar
    xe_event_name     nvarchar

    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name, you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
    SELECT
        t.trace_event_id,
        t.name [event_class],
        e.package_name,
        e.xe_event_name
    FROM sys.trace_events t
    INNER JOIN dbo.trace_xe_event_map e
        ON t.trace_event_id = e.trace_event_id

    There are a couple of things you'll notice as you peruse the output of this query:

    For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We've mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event: you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures.

    There are some Event Classes that did not make the cut and were not migrated. These fall into two categories. First, there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape – sorry.) The second category is bigger: with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

    Action Mapping

    The table dbo.trace_xe_action_map exists in the master database with the following structure:

    Column_name        Type
    trace_column_id    smallint
    package_name       nvarchar
    xe_action_name     nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field.

    SELECT
        c.trace_column_id,
        c.name [column_name],
        a.package_name,
        a.xe_action_name
    FROM sys.trace_columns c
    INNER JOIN dbo.trace_xe_action_map a
        ON c.trace_column_id = a.trace_column_id

    If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns, given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.

    Putting it all together

    If you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
    USE MASTER;
    GO
    SELECT DISTINCT
        tb.trace_event_id,
        te.name AS 'Event Class',
        em.package_name AS 'Package',
        em.xe_event_name AS 'XEvent Name',
        tb.trace_column_id,
        tc.name AS 'SQL Trace Column',
        am.xe_action_name AS 'Extended Events action'
    FROM (sys.trace_events te
        LEFT OUTER JOIN dbo.trace_xe_event_map em
            ON te.trace_event_id = em.trace_event_id)
    LEFT OUTER JOIN sys.trace_event_bindings tb
        ON em.trace_event_id = tb.trace_event_id
    LEFT OUTER JOIN sys.trace_columns tc
        ON tb.trace_column_id = tc.trace_column_id
    LEFT OUTER JOIN dbo.trace_xe_action_map am
        ON tc.trace_column_id = am.trace_column_id
    ORDER BY te.name, tc.name

    As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

    USE MASTER;
    GO
    DECLARE @trace_id int
    SET @trace_id = 1
    SELECT DISTINCT
        el.eventid,
        em.package_name,
        em.xe_event_name AS 'event',
        el.columnid,
        ec.xe_action_name AS 'action'
    FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
        LEFT OUTER JOIN dbo.trace_xe_event_map AS em
            ON el.eventid = em.trace_event_id)
    LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
        ON el.columnid = ec.trace_column_id
    WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

    But wait…there's more!

    If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike…that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?" Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!" What I've got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (though it doesn't have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I'm not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL

    Installing the procedure

    Just in case you're not familiar with installing CLR procedures… once you've compiled the assembly you can load it using a script like this:

    -- Context to master
    USE master
    GO
    -- Create the assembly from a shared location.
    CREATE ASSEMBLY TraceToXESessionConverter
    FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
    WITH PERMISSION_SET = SAFE
    GO
    -- Create a stored procedure from the assembly.
    CREATE PROCEDURE CreateEventSessionFromTrace
        @trace_id int,
        @session_name nvarchar(max)
    AS
    EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
    GO

    Enjoy!

    -Mike

    Read the article

  • Extending ASP.NET Output Caching

    One of the most sure-fire ways to improve a web application's performance is to employ caching. Caching takes some expensive operation and stores its results in a quickly accessible location. Since its inception, ASP.NET has offered two flavors of caching:

    - Output Caching - caches the entire rendered markup of an ASP.NET page or User Control (http://www.asp101.com/lessons/usercontrols.asp) for a specified duration.
    - Data Caching - an API for caching objects. Using the data cache you can write code to add, remove, and retrieve items from the cache.

    Until recently, the underlying functionality of these two caching mechanisms was fixed - both cached data
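    To make the two flavors concrete, here is a rough sketch of the same two ideas in Python (the article itself is about ASP.NET, so treat these function names as hypothetical stand-ins, not the framework's API):

        import time
        import functools

        # Data caching: memoize individual expensive lookups.
        @functools.lru_cache(maxsize=128)
        def get_product(product_id):
            time.sleep(1)  # stand-in for an expensive database query
            return {"id": product_id, "name": "widget"}

        # Output caching: keep the fully rendered page for a fixed duration.
        _page_cache = {}

        def render_page(url, duration=60):
            cached = _page_cache.get(url)
            if cached and time.time() - cached[0] < duration:
                return cached[1]  # cache hit: skip rendering entirely
            html = "<html>%s</html>" % get_product(1)["name"]  # stand-in for real rendering
            _page_cache[url] = (time.time(), html)
            return html

    The trade-off mirrors the article's distinction: output caching skips the most work but serves everyone the same markup for the duration, while data caching leaves page logic live and only shortcuts the expensive lookups underneath it.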

    Read the article

  • Internet Protocol Suite: Transmission Control Protocol (TCP) vs. User Datagram Protocol (UDP)

    How do we communicate over the Internet? How is data transferred from one machine to another? These activities are carried out by one of two transport protocols. This part of the Internet Protocol suite consists of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols are used to send data between two network endpoints; however, they have very distinct ways of transporting data from one endpoint to another.

    If reliability is the primary concern when transferring data between two network endpoints, then TCP is the proper choice. When a device attempts to send data to another endpoint using TCP, it first establishes a connection between the two devices that is maintained until the transmission has completed. The connection ensures the reliability of the transmission, because every segment that is sent must be acknowledged by the receiver and is retransmitted if it is lost. The bookkeeping needed to maintain the connection and acknowledge every segment increases the resources needed to perform the transmission. An example of this type of direct communication can be seen when a teacher tells students to do their homework. The teacher is talking directly to the students in order to communicate that the homework needs to be done. Students can then ask questions about the assignment to ensure that they have received the proper instructions.

    UDP is a less resource-intensive approach to sending data between two network endpoints. When a device uses UDP to send data across a network, the data is broken up and packaged with the destination address. The sending device then releases the data packets to the network, but cannot ensure when, or even if, the receiving device actually gets the data. The sending device depends on other devices on the network to forward the data packets to the destination device in order to complete the transmission. As you can tell, this type of transmission is less resource-intensive because no connection state or acknowledgements are needed, but it should not be used for transmitting data with strict reliability requirements, because the sending device cannot confirm that the transmission was received. An example of this type of communication can be seen when a teacher tells a student that they would like to speak with their parents. The teacher is relying on the student to complete the transmission to the parents, and the teacher has no guarantee that the student will actually inform the parents about the request.

    Both TCP and UDP are invaluable when attempting to send data across a network, but depending on the situation one protocol may be better than the other. Before deciding on which protocol to use, an evaluation of transmission speed, reliability, latency, and overhead must be completed in order to determine the best protocol for the situation.
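    As a minimal sketch of the difference in Python's socket API (the addresses and ports are placeholders, and the TCP connect assumes something is listening on that port):

        import socket

        # UDP: connectionless; the datagram is released to the network with
        # no delivery guarantee (the "student carries the message" case).
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.sendto(b"please do your homework", ("127.0.0.1", 9999))
        udp.close()

        # TCP: a connection (handshake) is established first, and every
        # segment is acknowledged and retransmitted if lost (the "teacher
        # speaks directly to the students" case).
        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.connect(("127.0.0.1", 9998))  # fails if no listener is present
        tcp.sendall(b"please do your homework")
        tcp.close()

    Note that the UDP sendto succeeds even when nobody is listening, which is exactly the fire-and-forget behavior described above, while the TCP connect raises an error immediately if the other endpoint is absent.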

    Read the article

  • Crystal Reports: 5 Tests for Top Performance

    Your masterpiece report is now complete. It doesn't just meet your customer's expectations, it blows them out of the water. All they want is a beautifully summarized report that can be displayed in a myriad of ways. Then disaster strikes! You run the report for a month against the live database, rather than the two days' worth of test data you used for development, and your report's runtime goes from twenty seconds to two hours. Every Crystal Reports developer has experienced this situation, and it can be one of the most frustrating aspects of report design. Thankfully there are a variety of things that can be done to combat bad performance, any one of which can reap huge benefits...

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web – Nginx, which is the fastest growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought in cold to an established codebase to help improve performance, and Nginx to host tier 2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world.

    In the course I start with a website in two parts – one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

    - Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
    - PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
    - PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
    - Hosting PHP in Nginx – configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content, by 800 requests per second over 1000 concurrent requests; and Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second with the same load. Of course, the headline stats don't tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different.

    As Web architecture moves away from heavy server-side processing to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.
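    The author's measurements were taken with JMeter; purely to illustrate the kind of requests-per-second comparison involved, here is a rough Python sketch of the same idea (the URL and load numbers are placeholders, not the course's setup):

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        def fetch(url):
            with urllib.request.urlopen(url) as response:
                response.read()

        def benchmark(url, total=1000, concurrency=100):
            started = time.time()
            with ThreadPoolExecutor(max_workers=concurrency) as pool:
                list(pool.map(fetch, [url] * total))
            return total / (time.time() - started)  # requests per second

        # e.g. run benchmark("http://localhost/") once per stack and compare

    Running the same workload against each stack and comparing the returned rates is the essence of the comparison, though a dedicated tool like JMeter also reports latency percentiles and error rates.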

    Read the article

  • PHP pages working slow from time to time

    - by user1038179
    I have a VPS with a limit of 2GB of RAM and 8 CPU cores. I have 5 sites on that VPS (one of them is just for testing, with no visitors except me). All 5 sites are image galleries, like wallpaper sites. Last week I noticed a problem on one site (the main domain, used for name servers, and also the one with the most traffic and visitors). That site has two image galleries: one is an old static HTML gallery made a few years ago, and the other, main one is powered by the ZENPhoto CMS. I also have that same gallery CMS on another two sites on that same VPS (one running site and one just-for-testing site). On the other two sites I have a different PHP-driven gallery.

    The problem is that after some time (it varies from 10 minutes to a few hours after an Apache restart), loading of pages on the main site becomes very slow, or I get a 503 Service Temporarily Unavailable error, so pages become unavailable. But it is just the part with the new CMS gallery; the old part of the site with static HTML pages works fine and fast. Also, the other two sites with the same CMS gallery, and the other two with the different PHP-driven gallery, work fine and fast at the same time.

    I thought it must be something with the CMS on the main site, because the other sites were working nicely. Then I tried to open the contact and guest book pages on the main site, which are outside of that CMS but are also PHP pages, and they did not load either, while those same contact PHP scripts were working on the other sites at the same time. So, when the site starts to hang, ONLY PHP-generated content is not working; like I said, static pages work. And ONLY on that one main site do I have problems.

    Then I need to restart Apache. After a restart everything works nicely and fast, for some time; then again, just the PHP pages on the main site become slower. If I do not restart Apache, that slowness lasts for some time (several minutes or hours, depending on traffic), and during that time PHP-driven content loads very slowly or is unavailable on that site. After some time, at moments everything starts to work and is fast again for a while, and then again. In hours with more traffic, PHP content loads slowly or is unavailable; in hours with less traffic it is sometimes fast and sometimes a little bit slower than usual. And once again: only on that main site, and only the PHP-driven pages; static pages work fast even in the highest-traffic hours, and the other sites, even the ones with the same CMS, work fast.

    Currently I have about 7000 unique visitors per day on that site, but the site worked nicely even with 11500 visitors per day, and about 17000 total visitors on the VPS (about 3 pages per unique visitor).
    When the site starts to slow down, I can sometimes see something like this in the Apache status:

    mod_fcgid status:
    Total FastCGI processes: 37

    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid    Active  Idle  Accesses  State
    11300  39      28    7         Working
    11274  47      28    7         Working
    11296  40      29    3         Working
    11283  45      30    3         Working
    11304  36      31    1         Working
    11282  46      32    3         Working
    11292  42      33    1         Working
    11289  44      34    1         Working
    11305  35      35    0         Working
    11273  48      36    2         Working
    11280  47      39    1         Working
    10125  133     40    12        Exiting(communication error)
    11294  41      41    1         Exiting(communication error)
    11277  47      42    2         Exiting(communication error)
    11291  43      43    1         Exiting(communication error)
    10187  108     43    10        Exiting(communication error)
    10209  95      44    7         Exiting(communication error)
    10171  113     44    5         Exiting(communication error)
    11275  47      47    1         Exiting(communication error)
    10144  125     48    8         Exiting(communication error)
    10086  149     48    20        Exiting(communication error)
    10212  94      49    5         Exiting(communication error)
    10158  118     49    5         Exiting(communication error)
    10169  114     50    4         Exiting(communication error)
    10105  141     50    16        Exiting(communication error)
    10094  146     50    15        Exiting(communication error)
    10115  139     51    17        Exiting(communication error)
    10213  93      51    9         Exiting(communication error)
    10197  103     51    7         Exiting(communication error)

    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid    Active  Idle  Accesses  State
    7983   1079    2     149       Ready
    7979   1079    11    151       Ready

    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid    Active  Idle  Accesses  State
    7990   1066    0     57        Ready
    8001   1031    64    35        Ready
    7999   1032    94    29        Ready
    8000   1031    91    36        Ready
    8002   1029    34    52        Ready

    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid    Active  Idle  Accesses  State
    7991   1064    29    115       Ready

    When the site is working nicely there are no lines with "Exiting(communication error)". Active and Idle are the time active and the time since the last request, in seconds.

    Here is the system info:

    Total processors: 8
    Processor #1: Vendor GenuineIntel, Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz, Speed 88.320 MHz, Cache 6144 KB (all other seven are the same)

    System Information:
    Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux

    Current Memory Usage:
           total    used    free     shared  buffers  cached
    Mem:   8388608  882164  7506444  0       0        0
    -/+ buffers/cache:  882164  7506444
    Swap:  0        0       0
    Total: 8388608  882164  7506444

    Current Disk Usage:
    Filesystem  Size  Used  Avail  Use%  Mounted on
    /dev/vzfs   100G  34G   67G    34%   /
    none

    System Details:
    Running on: Apache/2.2.22
    System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
    Powered by: PHP/5.3.10

    Current Configuration:
    Default PHP Version (.php files): 5
    PHP 5 Handler: fcgi
    PHP 4 Handler: suphp
    Apache suEXEC: on
    Apache Ruid2: off

    Apache Configuration – the following settings have been saved:
    fileetag: All
    keepalive: On
    keepalivetimeout: 3
    maxclients: 150
    maxkeepaliverequests: 10
    maxrequestsperchild: 10000
    maxspareservers: 10
    minspareservers: 5
    root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch
    serverlimit: 256
    serversignature: Off
    servertokens: Full
    sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH
    startservers: 5
    timeout: 30

    I hope I explained my problem clearly. Any help would be nice.
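    Update: looking at the "Exiting(communication error)" lines, one thing I plan to try is aligning mod_fcgid's process recycling with PHP's own request limit, since a mismatch can make PHP exit mid-request while Apache still expects the process to be alive. This is only a sketch of the settings I mean – the directive names are standard mod_fcgid/PHP ones, but the values are guesses, not a confirmed fix for this VPS:

    # Illustrative mod_fcgid tuning (e.g. in the Apache fcgid include) – values are guesses
    FcgidMaxProcesses 60
    FcgidIdleTimeout 60
    FcgidProcessLifeTime 120
    # Keep mod_fcgid's recycle limit in step with PHP's own limit (PHP's default is 500),
    # so PHP does not exit on its own mid-stream
    FcgidMaxRequestsPerProcess 500
    FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 500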

    Read the article

  • How do multipass shaders work in OpenGL?

    - by Boreal
    In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL, it seems a bit more complex because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader. One pass does the actual cel-shading effect and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? I don't want you saying that a geometry shader can be used for this because I just want to know the theory behind multipass GLSL shaders ;)
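    To make the question concrete, here is roughly how I imagine driving the two passes from C++ – one linked program per pass, with the mesh drawn twice. compileProgram and drawMesh are placeholder helpers (not real API calls), and I am guessing at the usual inverted-hull outline trick:

    // Placeholder helpers – assumes a GL context and function loader are already set up
    GLuint compileProgram(const char* vertPath, const char* fragPath); // compiles and links one program
    void drawMesh();                                                   // binds the VAO and issues the draw call

    GLuint celProgram     = compileProgram("cel.vert", "cel.frag");
    GLuint outlineProgram = compileProgram("outline.vert", "outline.frag");

    void renderFrame()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_CULL_FACE);

        // Pass 1: outline – outline.vert pushes vertices out along their normals;
        // culling front faces leaves only the inflated back faces, i.e. the silhouette
        glCullFace(GL_FRONT);
        glUseProgram(outlineProgram);
        drawMesh();

        // Pass 2: cel shading, drawn normally over the top
        glCullFace(GL_BACK);
        glUseProgram(celProgram);
        drawMesh();
    }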

    Read the article

  • SQLPeople Interviews - Jamie Thomson and Rob Farley

    - by andyleonard
    Introduction: Late last year I announced an exciting new endeavor called SQLPeople. At the end of 2010 I announced the 2010 SQLPeople Person of the Year. Interviews: I'm pleased to announce that the first two interviews have been posted. They are with my friend and co-SSIS-professional Jamie Thomson and Rob Farley, someone I had the pleasure of meeting in person at the PASS Summit 2010. I plan to post two or three interviews each week for the foreseeable future. Conclusion: SQLPeople is just one of the...(read more)

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: not hugely helpful, are they?  Both mention scans and ranges (nothing about seeks), and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true).

    Recall also yesterday’s post, where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

    Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right-hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows.  Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on; one such fragment is what lies behind the single Index Seek icon shown above.  You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest.

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, a query that seeks key_col = 32 is an example of a singleton lookup seek (it appears in the script at the end of this post).  Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID Lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.
    The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range.

    The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows:

    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col ASC)
    );

    Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right.

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5, because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
    Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

    One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending, depending on how the index was defined and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries, of course (you must always specify an ORDER BY clause if order is important), but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates: a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

    SELECT key_col
    FROM dbo.Example
    WHERE key_col = 10
    OR key_col BETWEEN 15 AND 20
    ORDER BY key_col ASC;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table.  Yes, a seek.  On a heap.  Not an index!  If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading!
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END;

    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    );

    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data]
    ON dbo.Example (data);

    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX) (key_col, data)
    SELECT key_col = V.number, data = V.number
    FROM master.dbo.spt_values AS V
    WHERE V.[type] = N'P'
    AND V.number BETWEEN 1 AND 100;

    -- ================
    -- Singleton lookup
    -- ================

    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32;

    -- ===========
    -- Range Scans
    -- ===========

    -- Query 1
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col <= 5
    ORDER BY E.key_col ASC;

    -- Query 2
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col > 95
    ORDER BY E.key_col DESC;

    -- Query 3
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 10
    AND E.key_col < 15
    ORDER BY E.key_col ASC;

    -- Query 4
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 20
    AND E.key_col < 25
    ORDER BY E.key_col DESC;

    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 10
    OR E.key_col BETWEEN 15 AND 20
    ORDER BY E.key_col ASC;

    -- === TIDY UP ===
    DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi
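    If you want to see the ‘Scan Count = 0’ behaviour mentioned earlier for yourself, wrap the singleton query in STATISTICS IO.  The message below shows the general shape of the output – the read counts are indicative only and may differ on your system:

    SET STATISTICS IO ON;

    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32;

    -- The Messages tab then reports something like:
    -- Table 'Example'. Scan count 0, logical reads 2, physical reads 0, ...

    SET STATISTICS IO OFF;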

    Read the article

  • Change Tracking

    - by Ricardo Peres
    You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are:

    - Change Tracking works with SQL Server 2008 Express
    - Change Tracking does not require SQL Server Agent to be running
    - Change Tracking does not keep the old values in case of an UPDATE or DELETE
    - Change Data Capture uses an asynchronous process, so there is no overhead on each operation
    - Change Data Capture requires more storage and processing

    Here's some code that illustrates its usage:

    -- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title

    -- enable change tracking for database Blog, for 2 days
    ALTER DATABASE Blog
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    -- enable change tracking for table Post
    ALTER TABLE Post
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- see current records on table Post
    SELECT * FROM Post
    SELECT * FROM sys.sysobjects WHERE name = 'Post'
    SELECT * FROM sys.sysdatabases WHERE name = 'Blog'

    -- confirm that table Post and database Blog are being change tracked
    SELECT * FROM sys.change_tracking_tables
    SELECT * FROM sys.change_tracking_databases

    -- see current version for table Post
    SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT
    FROM Post AS p
    CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;

    -- update post
    UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title';

    -- see current version for table Post
    SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT
    FROM Post AS p
    CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;

    -- see changes since version 0 (initial)
    SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT
    FROM CHANGETABLE(CHANGES Post, 0) AS c
    LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;

    -- is column Title of table Post changed since version 0?
    SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(
        COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'),
        (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c))

    -- get current version
    SELECT CHANGE_TRACKING_CURRENT_VERSION()

    -- disable change tracking for table Post
    ALTER TABLE Post DISABLE CHANGE_TRACKING;

    -- disable change tracking for database Blog
    ALTER DATABASE Blog SET CHANGE_TRACKING = OFF;

    You can read about the differences between the two options here. Choose the one that best suits your needs!
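    As a usage note, the typical consumption pattern for Change Tracking is an incremental sync: remember the version you last synchronized, then ask only for the changes since then. A minimal sketch along those lines, reusing the example table above – where you persist @last_sync between runs is up to your application:

    -- Capture the current version first, so the next sync can start from it
    DECLARE @current_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();
    DECLARE @last_sync BIGINT = 0; -- normally loaded from wherever you persisted it

    -- If the retention window (2 days above) has expired the bookmark, a full resync is needed
    IF @last_sync < CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('Post'))
        RAISERROR('Change tracking information expired; full resync needed.', 16, 1);

    -- Fetch the net changes since the last sync
    SELECT c.PostId, c.SYS_CHANGE_OPERATION, p.Title
    FROM CHANGETABLE(CHANGES Post, @last_sync) AS c
    LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;

    -- Finally, persist @current_version as the new @last_sync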

    Read the article

< Previous Page | 201 202 203 204 205 206 207 208 209 210 211 212  | Next Page >