Search Results

Search found 4781 results on 192 pages for 'red planet'.


  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line”, I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable: for example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risk of making incorrect decisions when making changes. Documentation is also very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need it most! SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date. The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source-controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control. The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
$serverName = "server\instance" $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list $userName = "username" $password = "password" # Path to SQLDoc.exe $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe" $arguments = @( "/server:$($serverName)", "/database:$($databaseName)", "/username:$($userName)", "/password:$($password)", "/filetype:html", "/outputfolder:.", # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line "/name:$databaseName Report", "/copyrightauthor:$([Environment]::UserName)" ) write-host $arguments & $SQLDocPath $arguments There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line please see the SQL Doc command line documentation If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.  Conclusion Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

    Read the article

  • Incorrect colour blending when using a pixel shader with XNA

    - by MazK
    I'm using XNA 4.0 to create a 2D game and, while implementing a layer-tinting pixel shader, I noticed that when the texture's alpha value is anything between 0 and 1 the end result is different than expected. The tinting works by selecting a colour and setting the amount of tint. This is achieved via the shader, which first works out the starting colour (for each of r, g, b and a): float red = texCoord.r * vertexColour.r; and then the final tinted colour: output.r = red + (tintColour.r - red) * tintAmount; The alpha value isn't tinted and is left as: output.a = texCoord.a * vertexColour.a; The picture in the link below shows different backdrops against an energy ball object whose outer glow hasn't blended as I would like it to. The middle two are incorrect: the second, non-tinted one should not show a glow against a white BG, and the third should be entirely invisible. The blending function is NonPremultiplied. Why is the alpha value interfering with the final colour?
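
    To picture the maths involved, here is a rough sketch in plain TypeScript (not HLSL) of the tint from the question followed by the standard non-premultiplied blend equation; the sample colour values and the white backdrop are illustrative assumptions, not values from the post.

        // Tint as described above, then blend the result over a destination pixel
        // using the usual non-premultiplied equation: src * src.a + dst * (1 - src.a).
        type RGBA = { r: number; g: number; b: number; a: number };

        function tint(tex: RGBA, vertex: RGBA, tintColour: RGBA, tintAmount: number): RGBA {
          const red = tex.r * vertex.r;
          const green = tex.g * vertex.g;
          const blue = tex.b * vertex.b;
          return {
            r: red + (tintColour.r - red) * tintAmount,
            g: green + (tintColour.g - green) * tintAmount,
            b: blue + (tintColour.b - blue) * tintAmount,
            a: tex.a * vertex.a, // alpha is left untinted, as in the question
          };
        }

        function blend(src: RGBA, dst: RGBA): RGBA {
          return {
            r: src.r * src.a + dst.r * (1 - src.a),
            g: src.g * src.a + dst.g * (1 - src.a),
            b: src.b * src.a + dst.b * (1 - src.a),
            a: 1,
          };
        }

        // A half-transparent glow pixel, fully tinted, drawn over a white backdrop (invented values):
        const glow: RGBA = { r: 1.0, g: 0.5, b: 0.0, a: 0.5 };
        const white: RGBA = { r: 1, g: 1, b: 1, a: 1 };
        const tintTo: RGBA = { r: 0, g: 0, b: 0, a: 1 };
        console.log(blend(tint(glow, white, tintTo, 1.0), white)); // differs from pure white wherever a > 0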

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final ), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out. 
    Chief among them has to be the fact that it uses a file’s read-only property to determine whether a file should be checked out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus should be checked in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember… free software is up for grabs! @jamiet

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog, SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff), in which I asked SQL Server developers to answer the following questions: Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren’t considered for one of the prizes – sorry):

    In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the sharepoint integration. If that's all you need, then svn with Tortoise would be my first choice. If you want to add build automation later, you can do this with cruisecontrol (is it still called that?), JetBrains, etc. For a long time I thought that Redgate's claims about "bridging the SSMS-VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using svn with Redgate SQL Source Control, with no build automation, but with scope to add it in the future. – Gavin Campbell

    I think putting the DB under source control is a great idea. I have issues with the earlier versions of SQL Source Control in that it provides little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together. Which is how it should have been all along. Sure I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools. Frankly I'm surprised databases don't have some sort of versioning built into them. – Nick Portelli

    Source control has been immensely useful and saved me from a lot of rework on more than one occasion. I have learned that you have to be extremely careful checking in data. Our system is internal only, so during the system production run once a week, if there is a problem that I can fix easily (for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible, since we have a specified time window. We do full test runs to minimize this but it has come up once or twice. We use Red-Gate source control to "push" from the test environment to the production environment. There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production. Gotta keep an eye on that. – Alan Dykes

    Goodness is it manual. And can be extremely painful at times. Not only are we running thin, we are constrained on the tools we can get ($$ must mean free). Certainly no excuse, and a great opportunity to improve my skills by learning new things. But... Getting buy-in on a proven process or methodology is hard, takes time, and diverts us from development. If SQL Source Control is easy to use and proven, oh boy could you get some serious fans around here! Seriously though, as the "accidental DBA" of this shop, any new ideas / easy-to-implement tools can make a world of difference in productivity and, most importantly, accuracy. Manual = bad. :) – John Hennesey (who left his email address)

    The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge. – Merrill Aldrich

    Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only a small proportion of what is a very interesting comment thread. @Jamiet

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and the black dot then tries to avoid it and get to the green target. I'm using a very simple way to avoid it: if the black dot is close to the red one, it changes its direction, moves for a while, and then heads towards the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: Smoothness isn't really the main point; the point is to avoid the red blocking "wall" rather than crash into it and only then avoid it. How could I implement a path-finding algorithm if I basically just have 3 points? (And would it make things much more complicated if you could place multiple obstacles?)
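
    For what it's worth, the "change direction when the dot is close to the obstacle" behaviour described above can be written as a small steering rule. The sketch below is TypeScript with invented speeds and radii, purely to illustrate the idea; it is not code from the question.

        // Head for the target, but push away from the obstacle whenever the dot
        // gets within a threshold distance of it. All constants are made up.
        type Vec = { x: number; y: number };

        const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y });
        const len = (v: Vec): number => Math.hypot(v.x, v.y);
        const norm = (v: Vec): Vec => {
          const l = len(v) || 1;
          return { x: v.x / l, y: v.y / l };
        };

        function step(dot: Vec, target: Vec, obstacle: Vec, speed = 2, avoidRadius = 50): Vec {
          const seek = norm(sub(target, dot));        // pull towards the green target
          const fromObstacle = sub(dot, obstacle);
          let steer = seek;
          if (len(fromObstacle) < avoidRadius) {
            // blend in a push away from the red obstacle while it is close
            const avoid = norm(fromObstacle);
            steer = norm({ x: seek.x + 1.5 * avoid.x, y: seek.y + 1.5 * avoid.y });
          }
          return { x: dot.x + steer.x * speed, y: dot.y + steer.y * speed };
        }

    With several obstacles, the same rule can simply sum one avoidance push per nearby obstacle, although proper path finding (for example over a grid or waypoint graph) scales better as the scene gets more cluttered.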

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about, and trying out, several ideas from others.

    Must-Haves

    Aliases
    [alias]
        ci = commit
        st = status
        co = checkout
        oneline = log --pretty=oneline
        br = branch
        la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
        df = diff
        dc = diff --cached
        lg = log -p
        lol = log --graph --decorate --pretty=oneline --abbrev-commit
        lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
        ls = ls-files
        ign = ls-files -o -i --exclude-standard

    Colors
    [color]
        ui = auto
    [color "branch"]
        current = yellow reverse
        local = yellow
        remote = green
    [color "diff"]
        meta = yellow bold
        frag = magenta bold
        old = red bold
        new = green bold
        whitespace = red reverse
    [color "status"]
        added = green
        changed = red
        untracked = cyan

    Core
    [core]
        autocrlf = true
        excludesfile = c:/Users/<user>/.gitignore
        editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff
    [merge]
        tool = kdiff3
    [mergetool "kdiff3"]
        path = c:/Program Files (x86)/KDiff3/kdiff3.exe
    [mergetool "p4merge"]
        path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
        cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
        keepTemporaries = false
        trustExitCode = false
        keepBackup = false
    [diff]
        guitool = kdiff3
    [difftool "kdiff3"]
        path = c:/Program Files (x86)/KDiff3/kdiff3.exe
    [difftool "p4merge"]
        path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
        cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • Smartassembly 5: it lives! Early Access builds now available

    - by Bart Read
    I'm pleased to announce that, late last week, we put out the first early access build for Smartassembly 5, Red Gate's fantastic code protection and error reporting tool, which we acquired last September. You can download it via: http://www.red-gate.com/messageboard/viewforum.php?f=116 It's obviously pretty early days, so please do not try to use this to protect a production application, but we've already done a lot of work in some key areas: We're simplifying and streamlining the licensing model (you won't see this yet, but a lot of the work on this has already been done). We've improved usability of the product, with a better menu, reordering of project settings, and better defaults. We've also fixed a load of bugs, which I'll let Alex blog about in more detail. On a slightly more trivial level, the curly braces are also no more. Over the coming weeks, we'll be adding more improvements, and starting usability tests. If you're interested in getting involved in the latter, please drop an email to [email protected].

    Read the article

  • Solving 2D Collision Detection Issues with Relative Velocities

    - by Jengerer
    Imagine you have a situation where two objects are moving parallel to one another and are both within range to collide with a static wall, like this: A common method used in dynamic collision detection is to loop through all objects in arbitrary order, solve for pair-wise collision detection using relative velocities, and then move the object to the nearest collision, if any. However, in this case, if the red object is checked first against the blue one, it would see that the relative velocity to the blue object is -20 m/s (and so would not collide this time frame). Then it would see that the red object would collide with the static wall, and the solution would be: And the red object passes through the blue one. So it appears to be a matter of choosing the right order in which you check collisions; but how can you determine which order is correct? How can this passing-through of objects be avoided? Is ignoring relative velocity and considering every object as static during pair-wise checks a better idea for this reason?
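
    To make the method from the first sentence concrete, here is a rough one-dimensional sketch in TypeScript; every position, velocity and radius is invented for illustration, and it implements only the described per-object "nearest collision" loop, not a fix for the ordering problem.

        // Bodies move along +x towards a static wall. Each body is checked against
        // every other body (using relative velocity) and against the wall, then moved
        // up to its earliest collision this frame, in arbitrary loop order.
        interface Ball { x: number; v: number; r: number; }

        const wallX = 100;
        const dt = 1;

        function earliestHit(a: Ball, others: Ball[]): number {
          let t = a.v > 0 ? (wallX - a.r - a.x) / a.v : Infinity;   // wall collision time
          for (const b of others) {
            const closing = a.v - b.v;               // relative velocity towards b
            const gap = b.x - b.r - (a.x + a.r);
            if (closing > 0 && gap >= 0) t = Math.min(t, gap / closing);
          }
          return t;
        }

        function stepAll(bodies: Ball[]): void {
          for (const a of bodies) {                  // arbitrary processing order
            const others = bodies.filter(b => b !== a);
            const t = Math.min(earliestHit(a, others), dt);
            a.x += a.v * t;
          }
        }

        const red: Ball = { x: 60, v: 30, r: 5 };    // invented values
        const blue: Ball = { x: 80, v: 10, r: 5 };
        stepAll([red, blue]);                        // the outcome depends on which body is visited first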

    Read the article

  • VMware Workstation executes a nonexistent and outdated file

    - by RED SOFT ADAIR-StefanWoe
    I execute a command-line program from a VM (VMware 7.1.1) with Windows XP. The executable file is located on the host machine. If I start a command line in the VM, using a drive mounted as .host\SharedFolders, I see the following:

        D:\projects\myProgram\WinRel>dir myProgram.exe
        02.09.2010 21:15    245.760 myProgram.exe

        D:\projects\myProgram\WinRel>myProgram.exe
        Processing Build: Feb 26 2009

    This is wrong! The whole execution of the program behaves like a version that is more than a year out of date! I triple-checked that there is no confusion or anything. If I start the program on the host, or if I even start it from the VM using a UNC path, it shows the last build date and executes as expected:

        C:\>dir \\myMachine\drive_d\projects\myProgram\WinRel\myProgram.exe
        02.09.2010 21:15    245.760 myProgram.exe

        C:\>\\myMachine\drive_d\projects\myProgram\WinRel\myProgram.exe
        Processing Build: Sep 2 2010

    Can this behavior somehow be explained? There MUST be a cache for the host-mounted drive. The program it executes does not exist anymore! If I remove it from the host, the VM cannot execute it anymore. If I restore it, the behavior becomes the same again.

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    Good evening, and thank you for reading this post. I am having a problem with Django after migrating the DB from SQLite to MySQL. Initially, for the first 48 hours, all ran well. But now we are experiencing high CPU about every 30 minutes. This is a production ESX4i VM host, with 2 x 2.8 GHz CPUs and 12 GB RAM. I have allocated 4 CPUs to this VM and 4 GB memory. Any insight into this configuration and help with the spikes in CPU would be appreciated. It is configured to use the prefork MPM. Outlined below are the configs for the different services:

    MySQL Server version: 5.1.61 Source distribution
    Django 1.3
    mod_wsgi
    Apache/2.2.15

    httpd.conf
    Timeout 120
    KeepAlive Off
    MaxKeepAliveRequests 400
    KeepAliveTimeout 3

    prefork MPM
    StartServers 8
    MinSpareServers 8
    MaxSpareServers 16
    ServerLimit 40
    MaxClients 40
    MaxRequestsPerChild 0

    worker MPM
    StartServers 16
    MaxClients 1024
    MinSpareThreads 64
    MaxSpareThreads 256
    ThreadsPerChild 64
    MaxRequestsPerChild 10240

    MySQL my.cnf
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=mysql
    symbolic-links=0
    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    wsgi.conf (/etc/httpd/conf.d/wsgi.conf)
    LoadModule wsgi_module modules/mod_wsgi.so
    WSGISocketPrefix /var/run/wsgi
    WSGIPythonEggs /var/tmp
    WSGIDaemonProcess SITE maximum-requests=10000
    WSGIProcessGroup SITE

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    I am having a problem with an OSQA site. It is Django/Apache/mod_wsgi configured site. Every hour, the CPU spikes to 164% (Average) for task HTTPD. After 10 minutes, it frees back up. I have reviewed the logs, cron tables, made many config changes, but cannot track this problem down. Can someone please look at the information below and let me know if it is a configuration problem, or if anyone else has experienced this issue. Running TOP shows HTTPD using 165% of CPU VMware performance monitor also displays spikes. This happens every hour for 10 minutes. I have the following information from server status Server Version: Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6 Server Built: Feb 7 2012 09:50:15 Current Time: Sunday, 10-Jun-2012 21:44:29 EDT Restart Time: Sunday, 10-Jun-2012 19:44:51 EDT Parent Server Generation: 0 Server uptime: 1 hour 59 minutes 37 seconds Total accesses: 1088 - Total Traffic: 11.5 MB CPU Usage: u80.26 s243.8 cu0 cs0 - 4.52% CPU load .152 requests/sec - 1682 B/second - 10.8 kB/request 4 requests currently being processed, 11 idle workers ....._..........__......W....................................... ...................................C._..._....._L__._L_._....... ...................... Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request 0-0 - 0/0/34 . 0.42 327 17 0.0 0.00 0.67 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 1-0 - 0/0/22 . 0.31 339 32 0.0 0.00 0.26 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 2-0 - 0/0/22 . 0.65 358 10 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 3-0 - 0/0/31 . 1.03 378 31 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 4-0 - 0/0/20 . 0.45 356 9 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 5-0 18852 0/16/34 _ 0.98 27 18120 0.0 0.37 0.62 69.180.250.36 osqa.informs.org GET /questions/289/what-is-the-difference-between-operations-re 6-0 - 0/0/32 . 0.94 309 29 0.0 0.00 0.64 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 7-0 - 0/0/31 . 1.15 382 32 0.0 0.00 0.75 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 8-0 - 0/0/21 . 0.28 403 19 0.0 0.00 0.20 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 9-0 - 0/0/32 . 1.37 288 16 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 10-0 - 0/0/33 . 
1.72 383 16 0.0 0.00 0.40 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 I am running Django 1.3 This is a mod_wsgi configuration and copied is the wsgi.conf file: <IfModule !python_module> <IfModule !wsgi_module> LoadModule wsgi_module modules/mod_wsgi.so <IfModule wsgi_module> <Directory /var/www/osqa> Order allow,deny Allow from all #Deny from all </Directory> WSGISocketPrefix /var/run/wsgi WSGIPythonEggs /var/tmp WSGIDaemonProcess OSQA maximum-requests=10000 WSGIProcessGroup OSQA Alias /admin_media/ /usr/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/contrib/admin/media/ Alias /m/ /var/www/osqa/forum/skins/ Alias /upfiles/ /var/www/osqa/forum/upfiles/ <Directory /var/www/osqa/forum/skins> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /var/www/osqa/osqa.wsgi </IfModule> </IfModule> </IfModule> This is the httpd.conf file Timeout 120 KeepAlive Off MaxKeepAliveRequests 100 MaxKeepAliveRequests 400 KeepAliveTimeout 3 <IfModule prefork.c> Startservers 15 MinSpareServers 10 MaxSpareServers 20 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 0 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> We are using MySQL The server is an ESX4i, configured for the VM to use 4 CPUs and 8 GB Ram. Hyper threading is enabled, 2 physical CPU's, with 4 Logical. the CPU are Intel Xeon 2.8 GHz. Total memory is 12GB

    Read the article

  • How to resolve all externally unresolved DNS queries?

    - by red eyes dev
    I am using PowerDNS on a Linux box (Debian 6). I would like to set up the PowerDNS server to resolve all externally unresolved DNS queries to a given internal host. Is this possible? How is it done? I think it's necessary to use pdns-recursor, but my configuration file doesn't work. I use MySQL as the backend. If I add google.com manually it works, but if I delete the entry I get "server failed" and the root DNS (or ISP DNS) servers don't answer me.

    Read the article

  • Does Basic User Authentication require 2-phase communication?

    - by RED SOFT ADAIR-StefanWoe
    My application connects to HTTP services on the Internet using boost::asio. Recently we added support for HTTP proxies and Basic User Authentication. We implemented Basic User Authentication by simply sending the authentication parameters with every HTTP call if a user has configured a proxy in our program. The parameters are sent as described here: Authorization: Basic <base64-encoded username:password>. This works at least for one user and his proxy server. Other users report that their proxy server replies with 407 Proxy Authentication Required. My guess is that some proxy servers accept one-phase authentication and that others don't, but I cannot find any information saying that a two-phase exchange is required, where the first call is always denied with a 407 and only a second call is accepted. Our program does not yet retry the call if a 407 has been returned. Do we have to add this? I asked this question before on Stack Overflow but did not get a sufficient answer.
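
    As a rough illustration of the two-phase pattern being asked about, here is a TypeScript/Node sketch (the URL is a placeholder and this is not the boost::asio code from the post): make the call once, and if the response is an authentication challenge, repeat it with the Basic credential attached, which is just base64("username:password") as described above. For a 407 from a proxy, the corresponding request header is Proxy-Authorization rather than Authorization.

        // Sketch only: hypothetical endpoint, Node 18+ global fetch.
        const url = "http://example.com/service";
        const basic = "Basic " + Buffer.from("username:password").toString("base64");

        async function callWithBasicAuth() {
          let res = await fetch(url);                           // first phase: no credentials
          if (res.status === 401 || res.status === 407) {
            // second phase: repeat the same request, now carrying the credential
            res = await fetch(url, { headers: { Authorization: basic } });
          }
          return res;
        }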

    Read the article

  • What are the default/recommended access rights for %ALLUSERSPROFILE%?

    - by RED SOFT ADAIR-StefanWoe
    We have a Windows application that reads and writes some data for all users. We place it at %ALLUSERSPROFILE%\OurProgram*.* We now encounter a few cases in larger companies, where users do not have write permission to %ALLUSERSPROFILE%. Most of these cases are running Windows 7. The problem does not occur on a normal desktop installation of Windows 7 though. What is the recommended policy for this location? I have not found any "official" information about this. Is there a different location where all users have write permission?

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.

    Read the article

  • Exceptional DBA 2011 Jeff Moden on why you should enter in 2012

    - by RedAndTheCommunity
    My "reign" as the Red Gate Exceptional DBA is almost over and I was asked to say a few words about this wonderful award. Having been one of those folks that shied away from entering the contest during the first 3 years of the award, I thought I'd spend the time encouraging DBAs of all types to enter. Winning this award has some obvious benefits. You win a trip to PASS including money towards your flight, paid hotel stay, and, of course, paid admission. You win a wonderful bundle of software from Red Gate to make your job as a DBA a whole lot easier. You also win some pretty incredible notoriety for your resume. After all, it's not everyone who wins a worldwide contest. To date, there are only 4 of us in the world who have won this award. You could be number 5! For me, all of that pales in comparison to what I found out during the entry process. I'm very confident in my skills, but I'm also humble. It was suggested to me that I enter the contest when it first started. I just couldn't bring myself to nominate myself. When the 2011 nomination period opened up, several people again suggested that I enter, so I swallowed hard and asked several co-workers to have a look at the online nomination form and, if they thought me worthy, to write a nomination for me. I won't bore you with the details, but what they wrote about me was one of the most incredible rewards that I could ever have hoped to receive. I had no idea of the impact that I'd made on my co-workers. Even if I hadn't made it to the top 5 for the award, I had already won something very near and dear that no one can ever top. "Even if I hadn't made it to the top 5 for the award, I had already won something very near and dear that no one can ever top." There's only one named winner and 4 "runners up" in this competition every year but don't let that discourage you. Enter this competition. Even if you work in the proverbial "Mom'n'Pop" shop, get your boss and the people you work with directly to nominate you. Even if you don't make it to the top 5, you might just find out that you're more of a winner than you think. If you're too proud to ask them, then take the time to nominate yourself instead of shying away like I did for the first 3 years. You work hard as a DBA and, as David Poole once said, if you're the first person that people ask for help rather than one of the last, then you're probably an Exceptional DBA. It's time to stand up and be counted! Win or lose, the entry process can be a huge reward in itself. It was for me. Thank you, Red Gate, for giving me such a wonderful opportunity. Thanks for listening folks and for all that you do as DBAs. As 'Red Green' says, "We're all in this together and I'm pullin' for ya". --Jeff Moden Red Gate Exceptional DBA 2011

    Read the article


  • Generate texture for a heightmap

    - by James
    I've recently been trying to blend multiple textures based on the height at different points in a heightmap. However, I've been getting poor results. I decided to backtrack and just attempt to recreate one single texture from an SDL_Surface (I'm using SDL) and just send that into OpenGL. I'll put my code for creating the texture and reading the colour values below. It is a 24-bit TGA I'm loading, and I've confirmed that the rest of my code works because I was able to send the surface's pixels directly to my createTextureFromData function and it drew fine.

        struct RGBColour
        {
            RGBColour() : r(0), g(0), b(0) {}
            RGBColour(unsigned char red, unsigned char green, unsigned char blue) : r(red), g(green), b(blue) {}
            unsigned char r;
            unsigned char g;
            unsigned char b;
        };

        // main loading code
        SDLSurfaceReader* reader = new SDLSurfaceReader(m_renderer);
        reader->readSurface("images/grass.tga");

        // new texture
        unsigned char* newTexture = new unsigned char[reader->m_surface->w * reader->m_surface->h * 3 * reader->m_surface->w];

        for (int y = 0; y < reader->m_surface->h; y++)
        {
            for (int x = 0; x < reader->m_surface->w; x += 3)
            {
                int index = (y * reader->m_surface->w) + x;
                RGBColour colour = reader->getColourAt(x, y);
                newTexture[index] = colour.r;
                newTexture[index + 1] = colour.g;
                newTexture[index + 2] = colour.b;
            }
        }

        unsigned int id = m_renderer->createTextureFromData(newTexture, reader->m_surface->w, reader->m_surface->h, RGB);

        // functions for reading pixels
        RGBColour SDLSurfaceReader::getColourAt(int x, int y)
        {
            Uint32 pixel;
            Uint8 red, green, blue;
            RGBColour rgb;
            pixel = getPixel(m_surface, x, y);
            SDL_LockSurface(m_surface);
            SDL_GetRGB(pixel, m_surface->format, &red, &green, &blue);
            SDL_UnlockSurface(m_surface);
            rgb.r = red;
            rgb.b = blue;
            rgb.g = green;
            return rgb;
        }

        // this function taken from SDL documentation
        // http://www.libsdl.org/cgi/docwiki.cgi/Introduction_to_SDL_Video#getpixel
        Uint32 SDLSurfaceReader::getPixel(SDL_Surface* surface, int x, int y)
        {
            int bpp = m_surface->format->BytesPerPixel;
            Uint8 *p = (Uint8*)m_surface->pixels + y * m_surface->pitch + x * bpp;
            switch (bpp)
            {
                case 1:
                    return *p;
                case 2:
                    return *(Uint16*)p;
                case 3:
                    if (SDL_BYTEORDER == SDL_BIG_ENDIAN)
                        return p[0] << 16 | p[1] << 8 | p[2];
                    else
                        return p[0] | p[1] << 8 | p[2] << 16;
                case 4:
                    return *(Uint32*)p;
                default:
                    return 0;
            }
        }

    I've been stumped by this, and I need help badly! Thanks so much for any advice.

    Read the article

  • C# Linq List Contains Similar Elements

    - by John Peters
    Hi All, I am looking for linq query to see if there exists a similar object I have an object graph as follows Cart myCart = new Cart { List<CartProduct> myCartProduct = new List<CartProduct> { CartProduct cartProduct1 = new CartProduct { List<CartProductAttribute> a = new List<CartProductAttribute> { CartProductAttribute cpa1 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa2 = new CartProductAttribute{ title="small" } } } CartProduct cartProduct2 = new CartProduct { List<CartProductAttribute> d = new List<CartProductAttribute> { CartProductAttribute cpa3 = new CartProductAttribute{ title="john" }, CartProductAttribute cpa4 = new CartProductAttribute{ title="mary" } } } } } I would like to get from the Cart = a CartProduct that has the exact same CartProductAttribute title values as a CartProduct that I need to compare. No more and no less. E.G. I need to find a similar CartProduct that has a CartProductAttribute with title="red" and a cartProductAttribute with title="small" in myCart (eg 'cartProduct1' in the example) CartProduct cartProductToCompare = new CartProduct { List<CartProductAttribute> cartProductToCompareAttributes = new List<CartProductAttribute> { CartProductAttribute cpa5 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa6 = new CartProductAttribute{ title="small" } } } So from object graph myCart cartProduct1 cpa1 (title=red) cpa2 (title=small) cartProduct2 cpa3 (title=john) cpa4 (title=mary) Linq query looking for cartProductToCompare cpa5 (title=red) cpa6 (title=small) Should find cartProduct1 Hope all this makes sense... Thanks
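
    The matching rule being described — same attribute titles, no more and no less — is essentially a set-equality check. Here is a rough sketch of that check (in TypeScript rather than the LINQ query being asked for, with a deliberately simplified type):

        type CartProduct = { attributes: { title: string }[] };

        // True when both products have exactly the same set of attribute titles.
        function sameAttributeTitles(a: CartProduct, b: CartProduct): boolean {
          const titlesA = new Set(a.attributes.map(x => x.title));
          const titlesB = new Set(b.attributes.map(x => x.title));
          return titlesA.size === titlesB.size && [...titlesA].every(t => titlesB.has(t));
        }

        // e.g. cart.products.filter(p => sameAttributeTitles(p, cartProductToCompare));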

    Read the article

  • Create array of objects based on another array?

    - by xckpd7
    I want to take an array like this: var food = [ { name: 'strawberry', type: 'fruit', color: 'red', id: 3483 }, { name: 'apple', type: 'fruit', color: 'red', id: 3418 }, { name: 'banana', type: 'fruit', color: 'yellow', id: 3458 }, { name: 'brocolli', type: 'vegetable', color: 'green', id: 1458 }, { name: 'steak', type: 'meat', color: 'brown', id: 2458 }, ] And I want to create something like this dynamically: var foodCategories = [ { name: 'fruit', items: [ { name: 'apple', type: 'fruit', color: 'red', id: 3418 }, { name: 'banana', type: 'fruit', color: 'yellow', id: 3458 } ] }, { name: 'vegetable', items: [ { name: 'brocolli', type: 'vegetable', color: 'green', id: 1458 }, ] }, { name: 'meat', items: [ { name: 'steak', type: 'meat', color: 'brown', id: 2458 } ] } ] What's the best way to go about doing this?
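
    One straightforward shape for this transformation is a group-by on the type field; here is a quick sketch (TypeScript here rather than plain JavaScript, with food being the array shown above):

        type Item = { name: string; type: string; color: string; id: number };

        // Bucket the items by their `type`, then flatten the buckets into
        // { name, items } objects in the order the types first appear.
        function groupByType(items: Item[]): { name: string; items: Item[] }[] {
          const groups = new Map<string, Item[]>();
          for (const item of items) {
            const bucket = groups.get(item.type) ?? [];
            bucket.push(item);
            groups.set(item.type, bucket);
          }
          return [...groups.entries()].map(([name, items]) => ({ name, items }));
        }

        // const foodCategories = groupByType(food);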

    Read the article

  • How to convert between Enums where values share the same names?

    - by Ross Watson
    Hi, if I want to convert between two Enum types, the values of which, I hope, have the same names, is there a neat way, or do I have to do it like this...? enum colours_a { red, blue, green } enum colours_b { yellow, red, blue, green } static void Main(string[] args) { colours_a a = colours_a.red; colours_b b; //b = a; b = (colours_b)Enum.Parse(typeof(colours_b), a.ToString()); } Thanks, Ross

    Read the article

  • SQL Joining on a one-to-many relationship

    - by Harley
    OK, here was my original question:

    Table one contains
    ID|Name
    1  Mary
    2  John

    Table two contains
    ID|Color
    1  Red
    2  Blue
    2  Green
    2  Black

    What I want to end up with is
    ID|Name|Red|Blue|Green|Black
    1  Mary  Y   Y
    2  John      Y    Y     Y

    It seems that because there are 11 unique values for color, and 1000's upon 1000's of records in table one, there is no 'good' way to do this. So, two other questions. Is there an efficient way to get this result? I can then create a crosstab in my application to get the desired result.

    ID|Name|Color
    1  Mary  Red
    1  Mary  Blue
    2  John  Blue
    2  John  Green
    2  John  Black

    If I wanted to limit the number of records returned, how could I do something like this?
    Where ((color='blue') AND (color<>'red' OR color<>'green'))

    So using the above example I would then get back
    ID|Name|Color
    1  Mary  Blue
    2  John  Blue
    2  John  Black

    I connect to Visual FoxPro tables via ADODB. Thanks!

    Read the article

  • Trim "Minify" inline css at runtime, expand it at edit time.

    - by Scott B
    My custom WP theme has a text block in the theme options panel that allows the user to create and maintain a custom CSS block that is applied to the site template at runtime. I would like to trim or "minify" this content before it's stored in the database, but retain all the whitespace when it's presented back to the user for editing. Would this be possible? For example, if the user has entered the following as their custom CSS code...

        .red {color:red;}
        .green {color:green;}
        .blue {color:blue;}

    Then I would like to store it in the database as:

        .red{color:red;}.green{color:green;}.blue{color:blue;}

    But still display it as it was input (i.e., retain all the whitespace and line breaks) when the user is editing the content via my theme options panel.
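
    As a rough illustration of the kind of trimming being described (a TypeScript/regex sketch, not WordPress/PHP code, and not a full CSS minifier), one minimal approach is to collapse whitespace and drop the spaces CSS does not need; note that the original formatting cannot be recovered from the minified form alone, so it would have to be stored or regenerated separately for editing.

        function minifyCss(css: string): string {
          return css
            .replace(/\s+/g, " ")                // collapse whitespace and newlines
            .replace(/\s*([{}:;,])\s*/g, "$1")   // drop spaces around structural characters
            .trim();
        }

        // minifyCss(".red {color:red;}\n.green {color:green;}")
        //   -> ".red{color:red;}.green{color:green;}"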

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
            <div class="second">
                <span class="third">Hello World</span>
            </div>
        </div>

    Which would be the fastest way to change the colour of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there was a tag in the middle that had a unique id of "foo"? Which one of the CSS methods above would be the fastest? I know why these methods are used etc.; I'm just trying to get a better grasp of how browsers render, and I have no idea how to make a test that times it. UPDATE: Nice answer, Gumbo. From the looks of it, on a regular site it would be quicker to use the full selector definition, since it finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd have a pretty large CSS file, though.

    Read the article
