Search Results

Search found 1093 results on 44 pages for 'don kirkby'.


  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
    Today is my birthday.

    Personal Note
    When I was young, I always looked forward to my birthday because on this day I would get gifts from everybody. Now that I am getting older, each birthday brings almost the same feeling, but the direction is different: now on each of my birthdays, I feel like giving gifts to everybody. I have received lots of support, love and respect from everybody, and now I must give it back. So on this birthday, I have a very unique gift for everybody – my latest course on SQL Server.

    How I Tune Performance
    I often get asked how I work on a normal day, and how I work when a performance tuning project is assigned to me. Lots of people have expressed a desire to see me explain and demonstrate my own method of solving performance problems when facing a real-world problem. That is a pretty difficult task: in the real world, nothing goes as planned, and scripted demonstrations have no place there. The real world demands real solutions, and in a timely fashion. If consultants do not demonstrate their capabilities in the very first few minutes, it does not matter how famous they are – they are eventually shown the door. It is true, and in my early career I faced it quite often. The trick I learned is to be honest from the start and to request absolutely transparent communication from the organization I am consulting for.

    Play by Play
    Play by Play is a very unique setup. It is not scripted, and it is a step-by-step course. It is like a reality show – a very real encounter with a problem and a real problem-solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server admin does in the real world. This Play by Play focuses on SQL Server performance tuning, and I go over optimizing queries and fine-tuning the server. The table of contents for this course is very simple.

    Introduction
    In the introduction I explain my basic strategies when I am approached by a customer for performance tuning.

    Basic Information Gathering
    In this module I explain how I gather various pieces of information for a performance tuning project. It is crucial for a consultant to demonstrate problem-solving capability to the customer early on. Even to resolve a small problem that has a big positive impact on performance, a consultant has to gather proper information from the start. In this module I demonstrate how one can collect all the important performance tuning metrics.

    Removing Performance Bottlenecks
    In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing tweaks that begin improving the performance of my server. This is a very effective method, and it gives an immediate return on effort.

    Index Optimization
    Indexes are considered a silver bullet for performance tuning. However, that is not always true – there are plenty of examples where performance gets even worse after an index is implemented. The key is to understand a few of the basic properties of indexes and implement the right things at the right time. In this module, I describe in detail how to do index optimization and what is right and wrong with indexes. If you are a DBA or developer and your application is running slow, this is a must-attend module for you. I have some really interesting stories to tell as well.
    Optimize Query with Rewrite
    Every problem has more than one solution. In this module we look at another famous but hard-to-master performance tuning skill – the query rewrite. There are a few do's and don'ts for any query rewrite. I take a very simple example and demonstrate how a query rewrite can improve the performance of a query many times over. I also share some funny real-world stories in this module.

    This course is hosted at Pluralsight. You will need a valid Pluralsight login to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE trial of Pluralsight to watch this course. As today is my birthday, I will give a free code to watch this course to 10 randomly selected people who express their desire to learn from it. Please leave a comment and I will send you a free code.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson, Search, so you may want to go read that first.

    They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you had better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. There are lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog.

    But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down, start to model, and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet, install your modeling software on their desktops, and take the hours or days required to train them up on how to use the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler.

    Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this:

    [Screenshot: A table in Oracle SQL Developer Data Modeler]

    What I would like to do is have my team or a co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill, or maybe the column names themselves aren’t up to snuff. What I am going to do now is search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS.

    [Screenshot: Search, filter, then Report]

    With the filter set to ‘Columns,’ if I run a report I’ll get only the columns that resolve to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button:

    [Screenshot: XLS or XLSX – either format is just fine]

    I want to decide how the column data is exported to Excel, though, so I’m going to create a report template that I can use going forward. Click the ‘Manage’ button and set up a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated and can get to work.

    Now let the Excel junkies do their stuff. Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude:

    [Screenshot: Be kind, please rew…use comments]

    Save the file and email it back to your modeler. Then update the model from Excel – that’s right, it’s a right mouse click from your model in the tree. If everything goes right, you’ll see a nice confirmation message: It’s alive!
    Another to-do item on tap is making this dialog more informative – we’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now. Voila!

    Why are we doing this again? The goal is to reduce the number of round trips between the modeler and the business process owner. If one of them is used to working with Excel, why not let them mark up their changes in the tool they already know? This is an Early Adopter release, and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. On my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words.

    I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest topic at TechEd NA this year was PowerShell. Fellows, if you still don’t know about it, you had better get running. Remember that Server Core installations are the future for SQL Server, and consequently so is the shell. You don’t want to be left out of this, right? Let’s see some PowerShell magic now.

    To start our tour, first we need to download two functions from PowerShell and SQL Server master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1.

    To have some data to play with, I created 10 CSV files with the same content. I just put the SQL Server error log into a CSV file and created 10 copies of it:

        # Just create a CSV with data to import, using the SQL Server error log
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
        $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:ComputerName
        $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
        for ($Count = 1; $Count -le 10; $Count++) {
            Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
        }

    Now in my path c:\sqlauthority, I have 10 CSV files. It is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository, and the table is CSV_SQLAuthority:

        CREATE TABLE [dbo].[CSV_SQLAuthority](
            [LogDate] [datetime] NULL,
            [Processinfo] [varchar](20) NULL,
            [Text] [varchar](MAX) NULL
        )

    Let’s play a little bit. I want to import all the CSV files from the path into the table synchronously:

        # Importing synchronously
        $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
        $DataTable = Out-DataTable -InputObject $DataImport
        Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable

    Very cool, right? Let’s do it asynchronously and in the background using PowerShell jobs:

        # If you want to do it all asynchronously
        Start-Job -Name 'ImportingAsynchronously' `
            -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
            -ScriptBlock {
                $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
                $DataTable = Out-DataTable -InputObject $DataImport
                Write-DataTable -ServerInstance "R2D2" `
                                -Database "SQLServerRepository" `
                                -TableName "CSV_SQLAuthority" `
                                -Data $DataTable
            }

    But what if I have CSV files that are large in size, and I want to import each one asynchronously?
    In this case, this is what should be done:

        Get-ChildItem "c:\SQLAuthority\*.csv" | % {
            Start-Job -Name "$($_)" `
                -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
                -ScriptBlock {
                    $DataImport = Import-Csv -Path $args[0]
                    $DataTable = Out-DataTable -InputObject $DataImport
                    Write-DataTable -ServerInstance "R2D2" `
                                    -Database "SQLServerRepository" `
                                    -TableName "CSV_SQLAuthority" `
                                    -Data $DataTable
                } -ArgumentList $_.FullName
        }

    How cool is that? Now for the fun stuff: let’s schedule it as a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step. Otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option.

    First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is c:\temp\automation. Then just add these lines at the end:

        Get-Job | Wait-Job | Out-Null
        Remove-Job -State Completed

    Why? See my post Dooh PowerShell Trick–Running Scripts That has Posh Jobs on a SQL Agent Job. Remember, this trick applies to ALL scripts that use PowerShell jobs with any kind of scheduling tool (SQL Server Agent, Windows Task Scheduler).

    Create a job called ImportCSV and a step called Step_ImportCSV, and choose CmdExec. Then you just need to schedule it or run it. I made a short video (with matching good background music) and you can see it at:

    That’s it, guys. C’mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check out what we can do with PowerShell and SQL Server, don’t miss Laerte Junior’s LiveMeeting on July 18. You can find more information at: LiveMeeting VC PowerShell PASS–Troubleshooting SQL Server With PowerShell–English

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell

    Read the article

  • F# for the C# Programmer

    - by mbcrump
    Are you a C# programmer who can’t make it past a day without seeing or hearing someone mention F#? Today, I’m going to walk you through your first F# application and give you a brief introduction to the language. Sit back – this will only take about 20 minutes.

    Introduction
    Microsoft’s F# programming language is a functional language for the .NET Framework that was originally developed at Microsoft Research Cambridge by Don Syme. In October 2007, the senior vice president of the developer division at Microsoft announced that F# was being officially productized to become a fully supported .NET language, and professional developers were hired to create a team of around ten people to build the product version. In September 2008, Microsoft released the first Community Technology Preview (CTP), an official beta release, of the F# distribution. In December 2008, Microsoft announced that the success of this CTP had encouraged them to escalate F#, and it will now be shipped as one of the core languages in Visual Studio 2010, alongside C++, C# 4.0 and VB. The F# programming language incorporates many state-of-the-art features from programming language research and ossifies them in an industrial-strength implementation that promises to revolutionize interactive, parallel and concurrent programming.

    Advantages of F#
    F# is the world’s first language to combine all of the following features:
    Type inference: types are inferred by the compiler and generic definitions are created automatically.
    Algebraic data types: a succinct way to represent trees.
    Pattern matching: a comprehensible and efficient way to dissect data structures.
    Active patterns: pattern matching over foreign data structures.
    Interactive sessions: as easy to use as Python and Mathematica.
    High performance JIT compilation to native code: as fast as C#.
    Rich data structures: lists and arrays built into the language with syntactic support.
    Functional programming: first-class functions and tail calls.
    Expressive static type system: finds bugs during compilation and provides machine-verified documentation.
    Sequence expressions: interrogate huge data sets efficiently.
    Asynchronous workflows: syntactic support for monadic-style concurrent programming with cancellations.
    Industrial-strength IDE support: multithreaded debugging, and graphical throwback of inferred types and documentation.
    Commerce-friendly design and a viable commercial market.

    Let’s try a short program in C#, then F#, to understand the differences.

    Using C#: create a variable and output the value to the console window. Sample program:

        using System;

        namespace ConsoleApplication9
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var a = 2;
                    Console.WriteLine(a);
                    Console.ReadLine();
                }
            }
        }

    A breeze, right? 14 lines of code. We could have condensed it a bit by removing the “using” statement and tossing the namespace, but this is the typical C# program.

    Using F#: create a variable and output the value to the console window. To start, open Visual Studio 2010 or Visual Studio 2008. Note: if using VS2008, please download the SDK first before getting started. If you are using VS2010 then you are already set up and ready to go. So, click File –> New Project –> Other Languages –> Visual F# –> Windows –> F# Application. You will get the screen below. Go ahead and enter a name and click OK.
    Now you will notice that the Solution Explorer contains the following: double-click Program.fs and enter the following code. Hit F5 and it should run successfully. Sample program:

        open System
        let a = 2
        Console.WriteLine a

    Hmm, what? F# did the same thing in 3 lines of code. Show me the interactive evaluation that I keep hearing about.

    The F# development environment for Visual Studio 2010 provides two different modes of execution for F# code:
    Batch compilation to a .NET executable or DLL (this was accomplished above).
    Interactive evaluation (demo is below).

    The interactive session provides a > prompt, requires a double semicolon ;; identifier at the end of a code snippet to force evaluation, and returns the names (if any) and types of the resulting definitions and values. To access the F# prompt in VS2010, go to View –> Other Windows, then F# Interactive. Once you have the interactive window, type in the following expression: 2+3;; as shown in the screenshot below:

    I hope this guide helps you get started with the language. Please check out the following books for further information.

    F# Books for further reading

    Foundations of F#
    Author: Robert Pickering
    An introduction to functional programming with F#. Including many samples, this book walks through the features of the F# language and libraries, and covers many of the .NET Framework features which can be leveraged with F#.

    Functional Programming for the Real World: With Examples in F# and C#
    Authors: Tomas Petricek and Jon Skeet
    An introduction to functional programming for existing C# developers written by Tomas Petricek and Jon Skeet. This book explains the core principles using both C# and F#, shows how to use functional ideas when designing .NET applications, and presents practical examples such as the design of a domain-specific language, development of multi-core applications, and programming of reactive applications.

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal — there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you’ll need to keep in mind when picking out the right SD card for your device.

    Speed Class

    In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you’re a professional photographer taking photos in rapid succession on a DSLR camera, saving them in high-resolution RAW format, you’ll want a fast SD card so your camera can save them as quickly as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you’re just taking a few photos on a typical consumer camera, or you’re just using an SD card to store some media files on your smartphone, the speed isn’t as important.

    Manufacturers use “speed classes” to measure an SD card’s speed. The SD Association that defines the SD card standard doesn’t actually define the exact speeds associated with these classes, but it does provide guidelines. There are four different speed classes — 10, 6, 4, and 2. 10 is the fastest, while 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for “full HD video recording” and “HD still consecutive recording.” There are also two Ultra High Speed (UHS) speed classes, but they’re more expensive and are designed for professional use. UHS cards are designed for devices that support UHS.

    [Speed class logos, in order from slowest to fastest]

    You’ll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you’re shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras. Even a cheap smartphone can record HD video, after all.

    An SD card’s speed class is identified on the SD card itself. You’ll also see the speed class on the online store listing or on the card’s packaging when purchasing it. For example, in the photo below, the middle SD card is speed class 4, while the two other cards are speed class 6. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced, and they may be slower than even a class 2 card.

    Physical Size

    Different devices use different sizes of SD cards. You’ll find standard-size SD cards, miniSD cards, and microSD cards.

    Standard SD cards are the largest, although they’re still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards. They have the standard “cut corner” design.

    miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have a smaller size.

    microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards. They’re also used in many other devices, such as tablets.
    SD cards will only fit into matching slots. You can’t plug a microSD card into a standard SD card slot — it won’t fit. However, you can purchase an adapter that lets you place a smaller SD card inside a larger SD card-shaped shell and fit it into the appropriate slot.

    Capacity

    Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don’t stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB — although 4 GB is non-standard. The SDHC standard was created later, and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size.

    You’ll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC. In fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common.

    When buying an SD card, you’ll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports, and consider what speed and capacity you’ll actually need.

    Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X, which are all powerful multi-user operating systems with similar folder and file permission systems.

    Windows

    On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer.

    Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public — just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this.

    These Public folders can also be used to share folders publicly on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Center control panel.

    You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this.

    Linux

    This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too.

    Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and delete files” permissions. Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.

    Mac OS X

    Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

    These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts.
You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
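    For the Windows case above, the same folder-permission change can also be made programmatically. Below is a minimal C# sketch (not from the original article, and written for the .NET Framework, where Directory.GetAccessControl is available) that grants a second local account read-write access to a folder; the folder path and account name are illustrative assumptions.

        using System.IO;
        using System.Security.AccessControl;

        class ShareFolderBetweenUsers
        {
            static void Main()
            {
                // Hypothetical folder and account -- substitute your own.
                const string folder = @"C:\SharedStuff";
                const string account = @"MYPC\OtherUser";

                Directory.CreateDirectory(folder);

                // Read the folder's current ACL, add an allow rule that
                // inherits to subfolders and files, then write it back.
                DirectorySecurity acl = Directory.GetAccessControl(folder);
                acl.AddAccessRule(new FileSystemAccessRule(
                    account,
                    FileSystemRights.Modify,
                    InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                    PropagationFlags.None,
                    AccessControlType.Allow));
                Directory.SetAccessControl(folder, acl);
            }
        }

    Running this as an administrator has the same effect as the Security-tab steps described above.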

    Read the article

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet — only over the same local network. Even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet.

    Why Stream?

    When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending back mouse, keyboard, and controller input to the other PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room in your house.

    Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers their own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.

    How to Get Started

    In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.)

    Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on your host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server.

    Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too.

    Improving Performance

    Here’s what Valve recommends for good streaming performance:

    Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency.

    Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough.

    Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed.
    Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up.

    In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory.

    Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.”

    Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald’s. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to secure their systems correctly. If you write software that stores passwords, they should be heavily encrypted, and never human-readable in any storage. I advocate a different store for the login and the password, so that if one is compromised, the other is not. I also advocate setting a bit flag when a user changes their password, and sending out a reminder to change passwords if that flag hasn’t changed in three or six months.

    But this post is about the *other* side — what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you’re not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords

    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here:

    · Don’t use a dictionary-based word
    · Use mixed case
    · Use punctuation, special characters and so on

    So this: password
    Isn’t nearly as safe as this: P@ssw03d

    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you’re in better shape — at least, better than with the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from dictionary words you get, the better.

    Of course, this doesn’t help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case…

    Use a Different Password at Every Site

    What? I have hundreds of sites! Are you kidding me? Nope — I’m not. If you use the same password at every site, when one site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you’re kind of hosed there. So even though you have hundreds of sites you visit, you need at least a different password at each site.

    But it’s easier than you think — if you use an algorithm.

    What I’m describing is to pick a “root” password, and then modify that based on the site or purpose. That way, if one site is compromised, you can still use that root password for the other sites. Let’s take that second password: P@ssw03d

    Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker — how about these:

    P@ssw03dRead
    ReadP@ssw03d
    PR@esasdw03d

    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to keep a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is “encrypted” — the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.
    Change Your Password on a Schedule

    I know. It’s a real pain. And it doesn’t seem worth it… until your account gets hacked. A quick note here — whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them) and then change the root password on every site, as quickly as I can.

    If you follow the tip above, it’s not as hard. Just add another number, year, month, day, something like that into the mix. It’s not unlike making a primary key in an RDBMS.

    P@ssw03dRead10242010

    Change the site password, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)

    If you have other tips, post them here. We can all learn from each other on this.
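    To make the append/prepend idea concrete, here is a tiny C# sketch (mine, not from the post itself) of the root-plus-site-tag-plus-date pattern described above; the root value and site tag are just the examples used earlier.

        using System;

        class SitePasswordExample
        {
            // Compose a site-specific password from a memorable root,
            // a per-site tag, and the date of the last change.
            static string ForSite(string root, string siteTag, DateTime changed)
            {
                return root + siteTag + changed.ToString("MMddyyyy");
            }

            static void Main()
            {
                string root = "P@ssw03d";  // the memorable root from the post
                Console.WriteLine(ForSite(root, "Read", new DateTime(2010, 10, 24)));
                // Prints: P@ssw03dRead10242010
            }
        }

    The scheme only has to be memorable to you; the password database still records the final per-site value.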

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as DLLs) can be downloaded from anywhere and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin DLLs can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox.

    Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.

    Defining assemblies as fully-trusted

    In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);

    Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox.

    So now you have a class that you want the sandboxed code to call:

        // within assemblyWithApi
        public class MyApi
        {
            public static void MethodToDoThings() { ... }
        }

        // within the sandboxed dll
        public class UntrustedSandboxedClass
        {
            public void DodgyMethod()
            {
                ...
                MyApi.MethodToDoThings();
                ...
            }
        }

    However, if you try to do this, you get quite an ugly exception:

        MethodAccessException: Attempt by security transparent method 'UntrustedSandboxedClass.DodgyMethod()' to access security critical method 'MyApi.MethodToDoThings()' failed.

    Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code.

    Security transparency and AllowPartiallyTrustedCallersAttribute

    So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api:

        [SecuritySafeCritical]
        public static void MethodToDoThings() { ... }

    However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on?

    By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods.
    This is because it may not have been designed in a secure way when called from transparent code — as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes.

    This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails.

    So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:

        [assembly: AllowPartiallyTrustedCallers]

    and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.

    Conclusion

    That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, an assembly with APTCA applied to it means that you have run a full security audit of every type and member in the assembly. If you don’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.
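    The snippets above use appDomainSetup and restrictedPerms without showing their construction (that was covered in the earlier posts in the series). For completeness, here is a minimal C# sketch of one common way to build them; the sandbox directory path is an illustrative assumption, not a prescribed value.

        using System;
        using System.Security;
        using System.Security.Permissions;

        static class SandboxSetupSketch
        {
            // An execute-only permission set: sandboxed code may run,
            // but gets no file, registry, UI or network permissions.
            public static PermissionSet BuildRestrictedPerms()
            {
                var perms = new PermissionSet(PermissionState.None);
                perms.AddPermission(
                    new SecurityPermission(SecurityPermissionFlag.Execution));
                return perms;
            }

            // The sandbox's ApplicationBase should point at the addin
            // directory, not the host's directory, so the host's own
            // assemblies are not loadable as partial-trust peers.
            public static AppDomainSetup BuildSetup()
            {
                return new AppDomainSetup
                {
                    ApplicationBase = @"C:\MyApp\Addins"  // hypothetical path
                };
            }
        }

    With these in place, the AppDomain.CreateDomain call shown earlier produces a sandbox in which only the strong-named API assembly runs full-trust.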

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first-contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be?

    We know that with the right tools, companies can aspire to greatness — and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times.

    With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

    Agent Scripting: Sometimes agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question-and-answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design of workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you — so win, win, win!

    I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

    Five top tips to take away:
    Start by working out “why” and “how” customers are contacting you.
    Implement a clean and relevant agent desktop to support your agents.
    If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    Enhance your knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    Script any complex, critical or sensitive interactions to ensure consistency and accuracy.

    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.

    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

    Useful resources: Review the Best Practice Guide. Review the tune-up guide.

    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    RightNow Customer Service Administration (3 days)
    RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    RightNow Analytics (2 days)
    RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • SQLAuthority News – 7th Anniversary of Blog – A Personal Note

    - by Pinal Dave
    Special Day

    Today is a very special day — seven years ago I blogged for the very first time. Seven years ago, I didn’t know what I was doing; I didn’t know how to blog, or even what a blog was or what to write. I was working as a DBA, and I was trying to solve a problem — at my job, there were a few issues I had to fix again and again and again. There were days when I was rewriting the same solution over and over, and there were times when I would get very frustrated because I could not write the same elegant solution that I had written before. I came up with a solution to this problem — posting these solutions online, where I could access them whenever I needed them. At that point, I had no idea what a blog was, or even how the internet worked; I had no idea that a blog would be visible to others. Can you believe it?

    Google it on Yahoo!

    After a few posts on this “blog,” there was a surprise for me — an e-mail saying that someone had left me a comment. I was surprised, because I didn’t even know you could comment on a blog! I logged on and read my comment. It said: “I like your script, but there is a small bug. If you could fix it, it will run on multiple other versions of SQL Server.” I was like, “wow, someone figured out how to find my blog, and they figured out how to fix my script!” I found the bug, I fixed the script, and wrote a thank-you note to the guy. My first question for him was: how did you figure it out — not the script, but how to find my blog? He said he found it from Yahoo Search (this was in the time before Google, believe it or not).

    From that day, my life changed. I wrote a few more posts, I got a few more comments, and I started to watch my traffic. People were reading, commenting, and giving feedback. At the end of the day, people enjoyed what I was writing. This was a fantastic feeling! I never thought I would be writing for others. Even today, I don’t feel like I am writing for others, but that I am simply posting what I am learning every day. From that very first day, I decided that I would not change my intent or my blog’s purpose.

    72 Million Views – 2600 Posts – 57000 Comments – 10 Books – 9 Courses

    Today, this blog is my habit, my addiction, my baby. Every day I try to learn something new, and that lesson gets posted on the blog. Lately there have been days where I am traveling for a full 24 hours, but even on those days I try to learn something new, and later, when I have free time, I will still post it to the blog. Because of this habit, this blog has over 72 million views, I have written more than 2600 posts, and there are 57,000 comments and counting. I have also written 10 books, created 9 courses, and learned so many things. This blog has given me back so much more than I ever put into it. It gave me an education, a reason to learn something new every day, and a way to connect with people. I like to think of it as a learning chain, a relay where we all pass knowledge from one to another.

    Never Ending Journey

    When I started the blog, I thought I would write for a few days and stop, but now after seven years I haven’t stopped and I have no intention of stopping! However, change happens, and for this blog it will start today. This blog started as a single resource for SQL Server, but now it has grown beyond that, to SharePoint, personal development, developer training, MySQL, Big Data, and lots of other things. Truly speaking, this blog is more than just SQL Server, and that was always my intention.
I named it “SQL Authority,” not “SQL Server Authority”!  Loudly and clearly, I would like to announce that I am going to go back to my roots and start writing more about SQL, more about big data, and more about the other technology like relational databases, MySQL, Oracle, and others.  My goal is not to become a comprehensive resource for every technology, my goal is to learn something new every day – and now it can be so much more than just SQL Server.  I will learn it, and post it here for you. I have written a very long post on this anniversary, but here is the summary: Thank You.  You all have been wonderful.  Seven years is a long journey, and it makes me emotional.  I have been “with” this blog before I met my wife, before we had our daughter.  This blog is like a fourth member of the family.  Keep reading, keep commenting, keep supporting.  Thank you all. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: About Me, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America, and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart’s insights, given his team’s role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday’s webcast. And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the discussion will focus on the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details:

    Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics
    Tuesday, November 29, 2011, 10:00 a.m. PT / 1:00 p.m. ET
    Register Today

    The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn’t have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions.

    Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application’s access management function / files?

    A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics’ import engine supports complex entitlement feeds saved as either text files or XML (a toy sketch of such a transform appears at the end of this Q&A). The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager.

    Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics?

    A. Oracle recently announced availability of the enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvements in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allow for more accurate certification reviews. And OIA’s improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity.

    Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today?

    A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program! Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future.

    Q. Would you be able to use this to help manage licenses for software? That is to say – could it track software that is not used by a user, thus eliminating the software license?

    A. OIA’s integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on a company’s policies, this could trigger an automated workflow for account deletion or a request for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when, and the current status.

    Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification?

    A. OIA’s identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user’s risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts.

    Q. How does Oracle Identity Analytics work with cloud security?

    A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements without hosting those in-house – instead having a hosting provider offer managed Identity Management services to the organization – Oracle Identity Analytics can be leveraged much the same way as you would in an on-premises (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways.

    Q. Would you recommend this as a cost-effective solution for a smaller organization with approximately 2,500 users?

    A. The key return on investment (ROI) from Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes, and minimizing audit exposure and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and a potential ROI calculation, we recommend you refer to the Forrester study on the Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.
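    To make the ETL-style import mentioned above concrete, here is a minimal sketch of the kind of transform such a pipeline performs – turning an application’s raw entitlement export into a single XML feed that an import engine could pick up on a schedule. Everything in it is hypothetical: the CSV columns, the XML element names and the file names are invented for illustration, and this is not OIA’s actual feed schema.

        import csv
        import xml.etree.ElementTree as ET

        # Hypothetical transform: read a CSV of user/entitlement pairs exported
        # by an application and emit one normalized XML feed. Column names and
        # XML layout are invented for illustration only.
        def csv_to_entitlement_feed(csv_path, xml_path):
            root = ET.Element("entitlementFeed")
            with open(csv_path, newline="") as f:
                for row in csv.DictReader(f):  # expects user_id,resource,entitlement columns
                    record = ET.SubElement(root, "record")
                    ET.SubElement(record, "user").text = row["user_id"]
                    ET.SubElement(record, "resource").text = row["resource"]
                    ET.SubElement(record, "entitlement").text = row["entitlement"]
            ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

        csv_to_entitlement_feed("hr_app_export.csv", "hr_app_feed.xml")

    Pointing a scheduler at a script like this is essentially what “scheduled on a periodic basis” means in practice.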

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble:

    An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s three addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers use. Could they simply be using static ARPs? ACLs? Other simple mechanisms?

    Two things to investigate here: why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems?

    The Answer

    SuperUser contributor Moses offers some insight:

    Cable modems aren’t like your home router (i.e. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISP’s servers. The server has to tell the modem whether its settings (and location on the cable network) are valid, and simply sets them to what the ISP has configured (bandwidth, DHCP allocations, etc.). For instance, when you tell your ISP “I would like a static IP, please,” they allocate one to the modem through their servers, and the modem allows you to use that IP. The same goes for bandwidth changes. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem.

    Could they simply be using static ARPs? ACLs? Other simple mechanisms?

    Every ISP is different, both in practice and in how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACLs and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my pay grade. I only got to work with the technician’s interface and do routine maintenance and service changes.

    What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access?

    Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do. Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, the server will simply tell your modem that it can’t have it.

    David Schwartz offers some additional insight, with a link to a white paper for the really curious:

    Most modern ISPs (the last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
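    As a rough illustration of the reverse-path check David Schwartz describes, here is a minimal sketch in Python – a toy model of strict reverse path forwarding, not real router code. The port names and prefixes are invented for illustration:

        import ipaddress

        # Toy model of strict reverse path forwarding (BCP 38): accept a packet
        # only if its source address belongs to the prefix the ISP routes to the
        # port it arrived on. Ports and prefixes below are illustrative only.
        ROUTES = {
            "port-1": ipaddress.ip_network("60.61.62.63/32"),  # customer A's allocation
            "port-2": ipaddress.ip_network("60.61.62.75/32"),  # customer B's allocation
        }

        def accept(port, source_ip):
            allocated = ROUTES.get(port)
            return allocated is not None and ipaddress.ip_address(source_ip) in allocated

        print(accept("port-1", "60.61.62.63"))  # True  - source matches the routed prefix
        print(accept("port-1", "60.61.62.75"))  # False - spoofed source, packet dropped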

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase

    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

        public class UntrustedPlugin : MarshalByRefObject, IPlugin
        {
            // implements IPlugin.MethodToDoThings()
            public void MethodToDoThings()
            {
                throw new EvilException();
            }
        }

        [Serializable]
        internal class EvilException : Exception
        {
            public override string ToString()
            {
                // show we have read access to C:\Windows
                // by reading the first 5 directories
                Console.WriteLine("Pwned! Mwuahahah!");
                foreach (var d in new DirectoryInfo(@"C:\Windows").EnumerateDirectories().Take(5))
                {
                    Console.WriteLine(d.FullName);
                }
                return base.ToString();
            }
        }

    And in the controlling assembly:

        // what can possibly go wrong?
        AppDomainSetup appDomainSetup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
        };

        // only grant permissions to execute
        // and to read the application base, nothing else
        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

        // execute UntrustedPlugin in the sandbox
        // don't crash the application if the sandbox throws an exception
        IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
        try
        {
            o.MethodToDoThings();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }

    And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: “The application base directory is where the assembly manager begins probing for assemblies.” When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling appdomain (as it’s marked as Serializable). To deserialize the exception, the CLR probes the controlling appdomain’s ApplicationBase, finds the sandboxed dll there, loads it into the fully-trusted appdomain, and the evil code in EvilException.ToString() executes with full trust. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s; it’s that the sandboxed dll was in a place where the controlling appdomain could find it as part of the standard assembly resolution mechanism. The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox.

    The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don’t grant the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain

    As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application’s appdomain. Code in assemblies in the reflection-only context can’t be executed, only reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions

    You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc.:

        [SecuritySafeCritical]
        public static string GetFileText(string filePath)
        {
            new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
            return File.ReadAllText(filePath);
        }

    Be careful when asserting permissions, and ensure you’re not providing a loophole that sandboxed dlls can use to gain access to things they shouldn’t be able to.

    Conclusion

    Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you decide to use the security system in the future.

    Read the article

  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
    I’d like to give you a bit of background first… so please bear with me! In 1996, whilst studying for the final year of my degree, I applied for a job as a C++ Developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn’t get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good, so in the absence of anything else I took it. Here began my career in the world of software testing!

    Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second-class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn’t find myself exposed to any of the changes that were happening in the industry. It wasn’t until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…

    In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details for the “Agile Testing Days” conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I’d shied away from attending conferences in the past, one of the main ones being that I didn’t see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.

    Tutorial Day – “Software Testing Reloaded”

    We chose to do the tutorials on the 19th. I chose the one titled “Software Testing Reloaded – So you wanna actually DO something? We’ve got just the workshop for you. Now with even less powerpoint!”. With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables, chairs etc. all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I’ll be using at work as a training exercise! (I won’t explain the game here cause I don’t want to let the cat out of the bag…)

    The tutorial consisted of several games, exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice. The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!!

    Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, to challenge requirements, and to have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction to the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the most safe and efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them:

    I have old CDs/DVDs which have some backups; these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time and I have managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely?

    How should he approach the problem?

    The Answer

    SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution:

    The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don’t do this very often – for small-scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous, way – find yourself an old microwave, and microwave them. I would suggest doing this in a well-ventilated area of course, and not using your mother’s good microwave. There are a lot of videos of this on YouTube – such as this one (done in a kitchen… using his mom’s microwave). This results in a very much destroyed CD in every respect. If I were an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us.

    Another contributor, Keltari, notes that the only safe (and DoD-approved) way to dispose of data is total destruction:

    The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc. CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed – minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like the DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or for educating ourselves on broader topics so that we can solve problems in the future. With dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I – haven’t put in the effort to learn what resources we have.

    That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed (see the short sketch at the end of this entry), you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact at Microsoft we listen – there is a real live person that reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think.

    In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them to help. I’ll try to bring up things like this from time to time that I find useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to filter whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them.

    Data Driven Design Concepts
    http://msdn.microsoft.com/en-us/library/windowsazure/jj156154

    I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data-end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address: [email protected]

    End-to-End Example of a Complete Hybrid Application – with Live Demo
    https://azurestocktrader.cloudapp.net/Default.aspx

    I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work. If you’ve ever wanted to see how a complex, complete, hybrid application bridges on-premises systems with cloud-based databases, code, functions and more, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else.

    Using a Cloud-Based Service
    https://azureconfigweb.cloudapp.net/Default.aspx

    Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment.

    So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”
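    A small practical aside on the RSS suggestion above: a feed is also easy to skim programmatically. Here is a minimal sketch, assuming the third-party feedparser package is installed; the feed URL is a placeholder, not a real endpoint:

        import feedparser  # third-party package: pip install feedparser

        # Skim the latest headlines from a blog's RSS feed instead of
        # polling the site by hand. The URL below is a placeholder.
        feed = feedparser.parse("http://example.com/blog/rss.xml")
        for entry in feed.entries[:10]:
            print(entry.title, "-", entry.link)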

    Read the article

  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged ‘Querying Over Constructed XML Using Sub-queries’. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words, and something reminded me of the old blog. I’d been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I’ve always used in the past, so I didn’t publish it. It was then I suddenly remembered the old routine. Here is my version of it. I’ve unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate. What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I’ve benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can’t use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I’ll carry on using my iterative function, since it is able to tell me the location of each word as a character offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.

    if exists (Select * from sys.xml_schema_collections where name like 'WordList')
      drop XML SCHEMA COLLECTION WordList
    go

    create xml schema collection WordList as '
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="words">
           <xs:simpleType>
                  <xs:list itemType="xs:string" />
           </xs:simpleType>
    </xs:element>
    </xs:schema>'
    go

    DECLARE @string VARCHAR(MAX) -- we'll get some sample data from the great Ogden Nash
    Select @String='This is a song to celebrate banks, Because they are full of money and you go into them and all you hear is clinks and clanks, Or maybe a sound like the wind in the trees on the hills, Which is the rustling of the thousand dollar bills. Most bankers dwell in marble halls, Which they get to dwell in because they encourage deposits and discourage withdrawals, And particularly because they all observe one rule which woe betides the banker who fails to heed it, Which is you must never lend any money to anybody unless they don''t need it. I know you, you cautious conservative banks! If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks; Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle, And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle. But suppose people come in and they have a million and they want another million to pile on top of it, Why, you brim with the milk of human kindness and you urge them to accept every drop of it, And you lend them the million so then they have two million and this gives them the idea that they would be better off with four, So they already have two million as security so you have no hesitation in lending them two more, And all the vice-presidents nod their heads in rhythm, And the only question asked is do the borrowers want the money sent or do they want to take it withm. Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happiness are everything and money isn''t essential, Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential. '

    -- we now turn it into XML
    declare @xml_data xml(WordList)
    set @xml_data='<words>'+ replace(@string,'&', '&amp;')+'</words>'

    select T.ref.value('.', 'nvarchar(100)')
    from (Select @xml_data.query('
                         for $i in data(/words) return
                         element li { $i }
                  ')) A(list)
    cross apply A.List.nodes('/li') T(ref)

    …which gives (truncated, of course)…

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say “thank you” to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we’ve picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We’ll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn’t. If you’ve got any questions about this (or what we’re working on right now), feel free to ask me in the comments below. I’ve had a few people express an interest in the process we’re going through, and I’m more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here’s a quick flashback to bring you all up to speed.

    A Brief Retrospective

    As you may already know, we’re creating a new publishing asset specifically focused on providing great content for web developers. We don’t yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is useful and different. For my part, I’m seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing that just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that’ll get better as we run more trials. Of course, there are a few things we’ve been pondering at this early conceptual stage:

    Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we’re familiar with) and branch out later? There are challenges with either approach.

    What publishing “modes” are already being well handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.

    Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and on how we manage the different types of content we (will) have.

    Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?

    What kind of users do we want to provide for? This actually deserves a little bit of unpacking…

    Why are you here?

    We currently divide readers into (broadly) these categories:

    Category 1: I know nothing about X, and I’d like to learn about it.
    Category 2: I know something about X, but I’d like to learn how to do something specific with it.
    Category 3: Ah man, I have a problem with X, and I need to fix it now.

    Now that I think about it, I might also include a 4th class of reader:

    Category 4: I’m looking for something interesting to engage my brain.

    These are clearly task-based categorizations, and depending on which task you’re performing when you arrive here, you’re going to need different types of content, or will have specific discovery needs. One of the questions that’s at the back of my mind whenever I consider a new idea is “How many of the categories will this satisfy?” As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it isn’t necessary to satisfy every category’s needs to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately.

    < / Flashback >

    We don’t have clean answers to most of these considerations – they’re things we’re aware of, and each idea we look at is going to be best suited to a different mix of the options I’ve described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I’ve just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you’d like to share, I’d love to hear from you.

    Read the article

  • updating/refreshing dojo datagrid with new store value on combobox value changes

    - by Raj
    Hey all, I have a combo box and a datagrid on my page. When the user changes the combo box value, I have to update the grid with the child details of the newly selected parent. How can I achieve this using the Dojo combo box and datagrid? The following code snippet is not working for me when I use the setStore method on the grid with new JSON data.

    <div dojoType="dojo.data.ItemFileReadStore" jsId="store" url="/child/index/"></div> // grid store
    <div dojoType="dojo.data.ItemFileReadStore" jsId="parentStore" url="/parent/index/"></div> // combo box store

    // combo box
    <input dojoType="dijit.form.ComboBox" value="Select" width="auto" store="parentStore" searchAttr="name" name="parent" id="parent" onchange="displayChildren()">

    // MY GRID
    <table dojoType="dojox.grid.DataGrid" jsId="grid" store="store" id="display_grid" query="{ child_id: '*' }" rowsPerPage="2" clientSort="true" singleClickEdit="false" style="width: 90%; height: 400px;" rowSelector="20px" selectionMode="multiple">
    <thead>
    <tr>
    <th field="child_id" name="ID" width="auto" editable="false" hidden="true">Text</th>
    <th field="parent_id" name="Parent" width="auto" editable="false" hidden="true">Text</th>
    <th field="child_name" name="child" width="300px" editable="false">Text</th>
    <th field="created" name="Created Date" width="200px" editable="false" cellType='dojox.grid.cells.DateTextBox' datePattern='dd-MMM-yyyy'></th>
    <th field="last_updated" name="Updated Date" width="200px" editable="false" cellType='dojox.grid.cells.DateTextBox' datePattern='dd-MMM-yyyy'></th>
    <th field="child_id" name="Edit/Update" formatter="fmtEdit"></th>
    </tr>
    </thead>
    </table>

    // onchange method of the parent combo box, in which I am trying to reload the grid with new data from the server
    function displayChildren() {
        var selected = dijit.byId("parent").attr("value");
        var grid = dojo.byId('display_grid');
        var Url = "/childsku/index/parent/" + selected;
        grid.setStore(new dojo.data.ItemFileReadStore({ url: Url }));
    }

    But it’s not updating my grid with new contents. I don’t know how to refresh the grid every time the user changes the combo box value. Could anyone help me solve this issue? I would be glad to get a solution for both ItemFileReadStore and ItemFileWriteStore. Thanks, Raj.

    Read the article

  • In Email, Image (img) Source (src) Tags are rewritten as relative links. How to fix?

    - by Noah Goodrich
    I'm working on sending out an html based email, and every time it sends the image src tags and some of the anchor href tags are modified to be relative url's. Update 2: This is happening between when the body of the email is generated and sent and when it arrives in my inbox. Update: I am using Postfix on a LAMPP server. In addition, I am using Zend_Mail to send the emails out. For example, I have a link: src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/header.jpg" And it gets rewritten as: src="../../../../images/email/highpoint_2009_04/header.jpg" What can cause this to occur and how is it corrected? Email headers: Return-Path: <[email protected]> X-Original-To: [email protected] Delivered-To: [email protected] Received: by mail.example.com (Postfix, from userid 0) id 6BF012252; Tue, 14 Apr 2009 12:15:20 -0600 (MDT) To: Gabriel <[email protected]> Subject: Free Map to Sales Success From: Somebody <[email protected]> Date: Tue, 14 Apr 2009 12:15:20 -0600 Content-Type: text/html; charset="utf-8" Content-Transfer-Encoding: multipart/related Content-Disposition: inline Message-Id: <[email protected]> Original content to be sent out: <table align="center" border="0" cellpadding="0" cellspacing="0" width="600"> <tbody> <tr> <td valign="top"> <a href="http://www.furnituretrainingcompany.com"> <img moz-do-not-send="true" alt="The Furniture Training Company - Know More. Sell More." src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/header.jpg" border="0" height="123" width="600"> </a> </td> </tr> </tbody> </table> <table align="center" border="0" cellpadding="0" cellspacing="0" width="600"> <tbody> <tr> <td valign="top"><img alt="Visit us at High Point to receive your free training poster" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/hero.jpg" moz-do-not-send="true" height="150" width="600"><br> </td> </tr> </tbody> </table> <table align="center" border="0" cellpadding="0" cellspacing="0" width="600"> <tbody> <tr> <td bgcolor="#ffffff" valign="top"><img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/spacer_content_left.jpg" moz-do-not-send="true" height="30" width="30"><br> </td> <td bgcolor="#ffffff" valign="top"><font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#000000" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><big><small><big><b>See you at Market</b></big><br> </small></big></big></big></big></font> <font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#000000" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><big><small><br> </small></big></big></big></big></font><small><font face="Helvetica, Arial, sans-serif">Visit our space to get your free Map to Sales Success poster! This unique 24 X 36 color poster is your guide to developing high volume salespeople with larger tickets. Find us in the new NHFA Retailer Resource Center located in the Plaza. <br> <br> Don&#8217;t miss Mark Lacy&#8217;s entertaining seminar "Help Wanted! My Sales Associates Can&#8217;t Sell Water to a Thirsty Camel." He&#8217;ll reveal powerful secrets for turning sales associates into furniture experts that will sell. See him Saturday, April 25th at 11:30 AM in the seminar room of the new NHFA Retail Resource Center in the Plaza. 
<br> <br> Stop by our space to learn how our ingenious internet-delivered training courses are easy to use, guaranteed to work, and cheaper than the daily donuts. Over 95% report increased sales. <br> <br> Plan to see us at High Point. </font></small> <font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#000000" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><big><small><small><br> <br> <br> <br> </small></small></big></big></big></big></font><small><font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#000000" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><small> </small></big></big></big></font></small> <a href="http://www.furnituretrainingcompany.com/map"><img alt="Find out more" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/image_content_left.jpg" moz-do-not-send="true" border="0" height="67" width="326"></a><br> <br> </td> <td bgcolor="#ffffff" valign="top"> <img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/spacer_content_middle.jpg" moz-do-not-send="true" height="28" width="28"><br> </td> <td bgcolor="#ffffff" valign="top"><img alt="Roadmap to Sales Success poster" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/image_content_right.jpg" moz-do-not-send="true" height="267" width="186"><br> <font face="Helvetica, Arial, sans-serif"><small><font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#000000" size="1"><big><big><big><small><b>Road Map to Sales Success<br> </b><br> </small></big></big></big></font>This beautiful poster is yours free for simply stopping by and visiting with us at High Point. <span class="moz-txt-slash">Our space is located inside the </span>new NHFA Retailer Resource Center in the Plaza Suites, 222 South Main St, 1st Floor. We will be at market from Sat April 25th until Thur April 30th. 
</small></font><br> </td> <td bgcolor="#ffffff" valign="top"><img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/spacer_content_right.jpg" moz-do-not-send="true" height="30" width="30"><br> <br> </td> </tr> </tbody> </table> <table align="center" border="0" cellpadding="0" cellspacing="0" width="600"> <tbody> <tr> <td bgcolor="#ffffff" valign="top"><img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/disclaimer_divider.jpg" moz-do-not-send="true" height="25" width="600"><br> </td> </tr> </tbody> </table> <table align="center" border="0" cellpadding="0" cellspacing="0" width="600"> <tbody> <tr> <td bgcolor="#ffffff" valign="top"><img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/spacer_disclaimer_left.jpg" moz-do-not-send="true"></td> <td bgcolor="#ffffff" valign="top"><img alt="" src="http://www.furnituretrainingcompany.com/images/email/highpoint_2009_04/spacer_disclaimer_middle.jpg" moz-do-not-send="true"><br> <font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" color="#666666" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><big><small><small><small>If you are not attending the High Point market in April but would still like to receive a free Road Map to Sales Success poster visit us on the web at <u><a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="http://www.furnituretrainingcompany.com">www.furnituretrainingcompany.com</a></u>, or to speak with a Furniture Training Company representative, call toll free (866) 755-5996. We do not offer free shipping outside of the U.S. and Canada. Retailers outside of the U.S. and Canada may call for more information. Limit one free Road Map to Sales Success per company. Other copies of the poster may be purchased on our web site.<br> <br> </small></small></small></big></big></big></big></font> <font color="#666666"><small><font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><small><small>We hope you found this message to be useful. However, if you'd rather not receive future emails of this sort from The Furniture Training Company, please <a moz-do-not-send="true" href="http://www.furnituretraining.com/contact">click here to unsubscribe</a>.<br> <br> </small></small></big></big></big></font></small><small><font originaltag="yes" style="font-size: 9px; font-family: Verdana,Arial,Helvetica,sans-serif;" face="Verdana, Arial, Helvetica, sans-serif" size="1"><big><big><big><small><small>&copy;Copyright 2009 The Furniture Training Company.<br> 1770 North Research Park Way, <br> North Logan, UT 84341. 
    [... remainder of the authored HTML email trimmed: the copyright/disclaimer block ("All Rights Reserved."), spacer images, and the footer image table ...]

    Content that gets sent:

    [the same markup as it actually goes out on the wire, quoted-printable encoded: every '=' is escaped as '=3D' (e.g. <table border=3D"0" cellspacing=3D"0" cellpadding=3D"0" width=3D"600" align=3D"center">), long lines are soft-wrapped with a trailing '=', and CRLFs appear as '=0D=0A'. The encoded body repeats the header, the hero image, the "See you at Market" copy, the Road Map to Sales Success poster offer, the disclaimer, and the footer, and ends with an unsubscribe link to http://localhost/ftc/app/unsubscribe.php?action=3DoptOut&pid=3D6121&cid=3D19&email=3Dmarkl@furnituretrainingcompany.com]
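    The =3D and =0D=0A sequences above are not corruption in themselves; they are quoted-printable encoding, the Content-Transfer-Encoding most mailers apply to HTML bodies. A minimal Python sketch of the round trip, using the standard-library quopri module (the sample string is illustrative, not taken from the message):

        import quopri

        # An illustrative fragment shaped like the markup above.
        html = '<table border="0" cellspacing="0" width="600" align="center">'

        encoded = quopri.encodestring(html.encode("ascii"))
        print(encoded)
        # b'<table border=3D"0" cellspacing=3D"0" width=3D"600" align=3D"center">'
        # every '=' is escaped as '=3D'; longer lines also gain '=' soft breaks

        decoded = quopri.decodestring(encoded).decode("ascii")
        assert decoded == html  # decoding recovers the original exactly

    A receiving mail client decodes this transparently; seeing the raw =3D forms in the delivered message usually means it was sent without the matching Content-Transfer-Encoding: quoted-printable header.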

    Read the article

  • PHP5.3 is not working with MySQL5.1 IIS7 Times out

    - by Thorn007
    I have set up PHP 5.3, MySQL 5.1, and IIS7 on Windows 7, but PHP doesn't want to work with MySQL. I'm assuming it is a configuration error or an incomplete install on my part.

    What is already working:
    - MySQL 5.1 is working
    - PHP 5.3 is working; phpinfo() shows its info and that I have enabled MySQL
    - IIS is set up and uses the FastCGI module to run PHP
    - IIS registers php.ini updates
    - port 3306 is firewall-free and open to the world
    - php.ini is configured correctly
    - I have added c:\php to the Windows system PATH

    In the past I remember moving a file, libmysql.dll, to System32, but it doesn't look like that comes with PHP 5.3.1, as the driver is built in now: http://us3.php.net/manual/en/mysqlnd.install.php. (This has been giving me so much trouble that I have been documenting my findings on my blog at http://inteldesigner.com/2010/code/having-problems-getting-php5-3-to-work-with-mysql5-1 )

    NEED: I need to install PHP manually; I don't want to use the quick installer or an older version. I need to get PHP 5.3 to work with MySQL 5.1 so I can install WordPress 2.9 and Drupal 7a. Any links or suggestions would be great; I have already done everything on the IIS web site, and nothing is working. I'm guessing it has not been updated for the new software.

    BUGS/SOLUTION: The solution is here: http://bugs.php.net/bug.php?id=50172. Thanks go to don.raman on the iis.net forums: http://forums.iis.net/p/1164911/1933894.aspx

    SYMPTOMS: The PHP function mysql_connect(), in conjunction with PHP 5.3, locks up the server and returns error 500. (IPv6 is the problem; see the link above.)

    TEST CODE:

        <?php
        $con = mysql_connect("localhost", "root", "***");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        // some code
        mysql_close($con);
        ?>

    ERRORS: From the browser:

        HTTP Error 500.0 - Internal Server Error
        C:\php\php-cgi.exe - The FastCGI process exceeded configured activity timeout

    When I run php -f c:\public_html\index.php from the command line, I get:

        PHP Warning: mysql_connect(): [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\public_html\index.php on line 10
        Warning: mysql_connect(): [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\public_html\index.php on line 10
        PHP Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\public_html\index.php on line 10
        Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\public_html\index.php on line 10
        Could not connect: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        C:\Users\Kevin>
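    The bug report linked above traces the hang to name resolution: on Windows, "localhost" can resolve to the IPv6 loopback (::1) first, and if MySQL 5.1 is listening only on IPv4 the connect attempt stalls until the FastCGI activity timeout fires. A short Python sketch (standard library only, offered here as a diagnostic, not part of the original question) to inspect the resolution order on a given machine:

        import socket

        # List the addresses "localhost" resolves to for port 3306, in order.
        # If the IPv6 loopback (::1) comes first and MySQL listens only on
        # IPv4, clients that try addresses in order can stall until a timeout.
        for family, _, _, _, sockaddr in socket.getaddrinfo(
                "localhost", 3306, 0, socket.SOCK_STREAM):
            label = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(label, sockaddr)

    If ::1 is listed first, the commonly cited workaround is to connect to the IPv4 loopback explicitly, e.g. mysql_connect("127.0.0.1", "root", "***"), which bypasses the IPv6 attempt entirely.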

    Read the article

  • Uploading a file using post() method of QNetworkAccessManager

    - by user304361
    I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of the QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:

        QFile compressedFile("temp");
        compressedFile.open(QIODevice::ReadOnly);
        netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload")), &compressedFile);

    What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to be sitting there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.

    From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data. I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way that the post() method of QNetworkAccessManager expects a QIODevice to behave.

    My questions are:
    - Is this some sort of limitation with the way that QFile works under Windows?
    - Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
    - Is this not going to work at all, and will I have to find some other way to upload my file?

    Any suggestions or hints would be appreciated. Thanks, Don
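    One workaround often suggested for this symptom is to sidestep streaming entirely: read the file into memory first and hand post() a byte array whose size is known up front, so the request carries a Content-Length and finishes deterministically. A minimal sketch, written in PyQt5 for illustration (the file name and upload URL come from the question; the event-loop setup and headers are assumptions):

        import sys
        from PyQt5.QtCore import QCoreApplication, QFile, QIODevice, QUrl
        from PyQt5.QtNetwork import QNetworkAccessManager, QNetworkRequest

        app = QCoreApplication(sys.argv)
        manager = QNetworkAccessManager()

        # Buffer the whole file; a QByteArray has a known size, so the
        # request is not left waiting on an open QIODevice for more data.
        f = QFile("temp")
        if not f.open(QIODevice.ReadOnly):
            raise IOError("cannot open file")
        payload = f.readAll()
        f.close()

        request = QNetworkRequest(QUrl("http://mywebsite.com/upload"))
        request.setHeader(QNetworkRequest.ContentTypeHeader,
                          "application/octet-stream")

        reply = manager.post(request, payload)
        reply.finished.connect(app.quit)  # leave the event loop when done
        app.exec_()
        print("upload finished, error code:", reply.error())

    The obvious trade-off is memory: buffering is fine for modest files, but for large uploads the streaming path (keeping the device open, and alive, until the reply's finished() signal fires) is still the behavior to debug.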

    Read the article

  • Python form POST using urllib2 (also question on saving/using cookies)

    - by morpheous
    I am trying to write a function to post form data and save the returned cookie info in a file, so that the next time the page is visited, the cookie information is sent to the server (i.e. normal browser behavior). I wrote this relatively easily in C++ using libcurl, but have spent almost an entire day trying to write it in Python using urllib2, still with no success. This is what I have so far:

        import urllib, urllib2
        import logging

        # the path and filename to save your cookies in
        COOKIEFILE = 'cookies.lwp'

        cj = None
        ClientCookie = None
        cookielib = None
        logger = logging.getLogger(__name__)

        # Let's see if cookielib is available
        try:
            import cookielib
        except ImportError:
            logger.debug('importing cookielib failed. Trying ClientCookie')
            try:
                import ClientCookie
            except ImportError:
                logger.debug('ClientCookie isn\'t available either')
                urlopen = urllib2.urlopen
                Request = urllib2.Request
            else:
                logger.debug('imported ClientCookie succesfully')
                urlopen = ClientCookie.urlopen
                Request = ClientCookie.Request
                cj = ClientCookie.LWPCookieJar()
        else:
            logger.debug('Successfully imported cookielib')
            urlopen = urllib2.urlopen
            Request = urllib2.Request
            # This is a subclass of FileCookieJar
            # that has useful load and save methods
            cj = cookielib.LWPCookieJar()

        login_params = {'name': 'anon', 'password': 'pass'}

        def login(theurl, login_params):
            init_cookies()
            data = urllib.urlencode(login_params)
            txheaders = {'User-agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
            try:
                # create a request object
                req = Request(theurl, data, txheaders)
                # and open it to return a handle on the url
                handle = urlopen(req)
            except IOError, e:
                log.debug('Failed to open "%s".' % theurl)
                if hasattr(e, 'code'):
                    log.debug('Failed with error code - %s.' % e.code)
                elif hasattr(e, 'reason'):
                    log.debug("The error object has the following 'reason' attribute :" + e.reason)
                sys.exit()
            else:
                if cj is None:
                    log.debug('We don\'t have a cookie library available - sorry.')
                else:
                    print 'These are the cookies we have received so far :'
                    for index, cookie in enumerate(cj):
                        print index, ' : ', cookie
                    # save the cookies again
                    cj.save(COOKIEFILE)
                # return the data
                return handle.read()

        # FIXME: I need to fix this so that it takes into account any cookie data we may have stored
        def get_page(*args, **query):
            if len(args) != 1:
                raise ValueError("post_page() takes exactly 1 argument (%d given)" % len(args))
            url = args[0]
            query = urllib.urlencode(list(query.iteritems()))
            if not url.endswith('/') and query:
                url += '/'
            if query:
                url += "?" + query
            resource = urllib.urlopen(url)
            logger.debug('GET url "%s" => "%s", code %d' % (url, resource.url, resource.code))
            return resource.read()

    When I attempt to log in, I pass the correct username and password, yet the login fails and no cookie data is saved. My two questions are:
    1. Can anyone see what's wrong with the login() function, and how can I fix it?
    2. How can I modify the get_page() function to make use of any cookie info I have saved?
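    The usual culprit in code shaped like this is that the cookie jar is never installed into the opener that actually performs the requests: plain urllib2.urlopen (and urllib.urlopen in get_page) uses the default opener, so cookies are neither captured nor resent. A minimal sketch of one way to wire it up, kept in the same Python 2 / urllib2 idiom as the question (URLs and form fields are illustrative):

        import urllib, urllib2, cookielib

        COOKIEFILE = 'cookies.lwp'

        # One cookie jar, installed into one opener; every request made
        # through this opener stores and resends cookies automatically.
        cj = cookielib.LWPCookieJar(COOKIEFILE)
        try:
            cj.load()          # pick up cookies saved by an earlier run
        except IOError:
            pass               # no cookie file yet; start empty
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

        def login(theurl, login_params):
            data = urllib.urlencode(login_params)
            handle = opener.open(theurl, data)   # data present => POST
            cj.save(COOKIEFILE)                  # persist the session cookie
            return handle.read()

        def get_page(url):
            # Same opener, so the saved cookies ride along with the GET.
            return opener.open(url).read()

    The design point is that the jar and the opener must be the same pair across login and subsequent page fetches; saving the jar to disk is only needed if cookies should survive between runs of the program.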

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >