Search Results

Search found 5177 results on 208 pages for 'fish shell'.


  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day but it's become apparent that this isn't frequent enough. An uncompressed log is about 6G, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests straight shell concatenation "works" in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partitions and the need to provide a daily compressed log.
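
    For reference, the gzip format does allow members to be concatenated: gunzip and zcat decompress all members in sequence, and only gzip -l misreports the combined file (it shows statistics for the last member only). A rough sketch of the hourly approach, assuming rotatelogs writes hourly files with hypothetical names like access_log.2012-10-09-00:

        # Append each hour's compressed log to the running daily archive
        # without recompressing anything that came before.
        for f in access_log.2012-10-09-*; do
            gzip "$f"                                  # compress the hour just rotated out
            cat "$f.gz" >> access_log.2012-10-09.gz    # gzip members can simply be appended
            rm "$f.gz"
        done

        # Sanity check: zcat reads every member, while gzip -l only
        # reports the last one -- expected behaviour for multi-member files.
        zcat access_log.2012-10-09.gz | wc -l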

    Read the article

  • UEFI boot options gone

    - by user1797930
    I ran into some issues booting Windows after trying to make a complete backup of the disk. After searching for information about some of the error codes, I found advice to change some BIOS settings, but instead I thought I would just "restore defaults" to make sure all settings were set as originally intended. After doing so, all UEFI boot options except for "Windows Boot Manager" are gone. That includes the CD/DVD drive, so I cannot even boot from a recovery DVD anymore - and as explained, Windows is not able to boot either. Do you have any advice? When I originally added a secondary drive, it was automatically added to the boot options menu. Even when removing and re-adding the drive physically, the option does not appear again. I have tried unplugging the power, holding down the power button for 10 seconds, and booting afterwards - no change. It's a laptop, so removing the CMOS battery is not an option. I have read that this is an issue with data removed from NVRAM, but I am unable to find a way to recover it. "Add new boot options" requires a path - but the CD/DVD was originally available without any discs in the drive - so there is no path available to add the drive. I did try to open an EFI shell, but it seems not to be embedded in the UEFI/BIOS; it just says "not found". I'm really lost here - any advice is appreciated.

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS server that I have recently set up and am migrating to from Mediatemple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to allow multiple people access to different domains on a server. I have most everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com but have to modify files as root in order to add/change files for the domains. I would like to set up access with the following criteria: each domain can have a user assigned to it (and allow for the same user to manage multiple domains - could even create symlinks in their home folder to their domains); certain users will have shell access and may be chrooted to the domain directory they control; FTP needs to be set up and able to correctly access the domains so that content editors for each domain can upload/download without permissions issues. I am relatively new to Linux sysadmin and have searched for a good guide to help solve these issues but haven't been able to find one yet. Thanks in advance for your help.
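
    One common pattern (a sketch only, with hypothetical names: domain example.com, editor alice; www-data is Apache's default user on Ubuntu) is a group per domain plus a setgid docroot. Shell access can then be granted per user, and most FTP daemons can jail users to their home or a configured directory.

        # Hypothetical domain and content editor.
        DOMAIN=example.com
        EDITOR_USER=alice

        # One group per domain; Apache's user joins it so it can read the files.
        sudo groupadd "$DOMAIN"
        sudo useradd -m -s /bin/bash -G "$DOMAIN" "$EDITOR_USER"
        sudo usermod -a -G "$DOMAIN" www-data

        # Group-owned, setgid docroot: new files inherit the domain group,
        # so editors and Apache never fight over ownership.
        sudo chown -R root:"$DOMAIN" "/var/www/vhosts/$DOMAIN"
        sudo find "/var/www/vhosts/$DOMAIN" -type d -exec chmod 2775 {} \;
        sudo find "/var/www/vhosts/$DOMAIN" -type f -exec chmod 664 {} \;

        # Optional convenience symlink in the editor's home directory.
        sudo ln -s "/var/www/vhosts/$DOMAIN" "/home/$EDITOR_USER/$DOMAIN"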

    Read the article

  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PF) have to be phased out in favour of shared mailboxes. Unfortunately, they don't seem to have any procedures or guidelines for this migration and I can't find much online either. I've already migrated one of our public folders so far as a sort of test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (With New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account on the AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state and copied the entire folder to a local PST in Outlook 2010, from which the folder was in turn copied to the new mailbox. Permissions and Folder Visible were set for all users and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient. I'd welcome some input on automating or at least streamlining the process. Additionally, we are unsure of what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving folder structure is a key requirement and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the Calendars and Contact lists in our public folders. I suppose I'll be forced to create new mailboxes for every one of these we have as well, then set up share permissions for their Calendar and Contact items, but would be happy to be proven wrong.

    Read the article

  • How do I install gfortran (via cygwin and etexteditor) and enable ifort under Windows XP?

    - by bez
    I'm a newbie in the Unix world so all this is a little confusing to me. I'm having trouble compiling some Fortran files under Cygwin on Windows XP. Here's what I've done so far: Installed the e text editor. Installed Cygwin via the "automatic" option inside e text editor. I need to compile some Fortran files so via the "manage bundles" option I installed the Fortran bundle as well. However, when I select "compile single file" I get an error saying gfortran was missing, and then that I need to set the TM_FORTRAN variable to the full path of my compiler. I tried opening a Cygwin bash shell at the path mentioned (.../bin/gfortran), but the compiler was nowhere to be found. Can someone tell me how to install this from the Cygwin command line? Where do I need to update the TM_FORTRAN variable for the bundle to work? Also, how do I change the bundle "compile" option to work with ifort (my native compiler) on Windows? I've read the bundle file, but it is totally incomprehensible to me. Ifort is a Windows compiler, invoked simply by ifort filename.f90, since it is on the Windows path. I know this is a lot to ask of a first time user here, but I really would appreciate any time you can spare to help.
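
    A rough sketch of the command-line route, assuming the Cygwin installer was saved as setup.exe and the Fortran package in your release is named gcc4-fortran (newer releases call it gcc-fortran; check the package search on cygwin.com):

        # Re-run Cygwin's installer unattended and add the Fortran compiler
        # (run from the folder that holds setup.exe).
        setup.exe -q -P gcc4-fortran

        # Back in a Cygwin bash shell, confirm where it landed:
        which gfortran
        gfortran --version

        # Point the Fortran bundle at it, wherever e lets you define shell
        # variables for bundles (the path below is a typical default):
        export TM_FORTRAN=/usr/bin/gfortran

    For ifort, the same variable (or the bundle's compile command) could instead point at ifort, since it is already on the Windows PATH; how well the bundle copes with a non-gfortran compiler is something you would need to test.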

    Read the article

  • What Logs / Process Stats to monitor on an Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at the various parameters output by the "ps" command and compare them over time to spot things like memory leaks. But I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks.
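
    A few starting points, sketched with placeholder device names (tune2fs applies to ext2/3/4 filesystems; adjust for yours):

        # Quick health snapshot: memory, load, and the hungriest processes.
        free -m
        uptime
        ps aux --sort=-%mem | head -n 15      # a steadily growing RSS hints at a leak

        # Failed or invalid SSH logins (the same file also records sudo use).
        grep -Ei 'fail|invalid' /var/log/auth.log | tail -n 20

        # When was the root filesystem last checked, and what are the limits?
        sudo tune2fs -l /dev/sda1 | grep -Ei 'mount count|last checked|check interval'

        # fsck only runs on an unmounted filesystem, so rather than being
        # surprised by the "maximal mount count" check, schedule one yourself:
        # touch /forcefsck and reboot at a quiet hour.
        sudo touch /forcefsck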

    Read the article

  • Unwanted blank lines when committing from SVN

    - by Alon_A
    I'm using CentOS Linux 5.8 as a web server and TortoiseSVN for synchronizing versions of our code. We write the code on Windows 7 Professional 64-bit with NetBeans and Notepad++. I'm checking the code files (.php) out to the server from the Linux shell with this command: svn co svnFolder serverFolder --username **** --password **** The problem is that after checking out the files, when I open them directly from the server (for debugging) in Notepad++ (via View/Edit in FileZilla), I see extra blank lines. Code that looks like this on localhost (in Notepad++): private $producer; private $account; private $admin; private $producerEvents; private $accountProducers; private $adminAccounts; will look like this on the server (again, in Notepad++), with an extra blank line after each statement: private $producer; private $account; private $admin; private $producerEvents; private $accountProducers; private $adminAccounts; If I upload the files by FTP, no blank lines are added. How can I solve it? Thanks.
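
    If the root cause is Windows CRLF line endings (the usual culprit when Windows-edited files grow doubled lines elsewhere), a rough sketch of two fixes follows; the paths and file pattern are placeholders:

        # One-off cleanup: strip carriage returns from the checked-out files.
        find /path/to/serverFolder -name '*.php' -print0 | xargs -0 sed -i 's/\r$//'

        # Longer term: let Subversion store native line endings, so Linux
        # checkouts get LF and Windows working copies get CRLF.
        cd /path/to/workingCopy
        svn propset svn:eol-style native $(find . -name '*.php')
        svn commit -m "Normalize line endings with svn:eol-style"

        # For new files, enable auto-props in ~/.subversion/config:
        #   [miscellany]
        #   enable-auto-props = yes
        #   [auto-props]
        #   *.php = svn:eol-style=native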

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to undo an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile. Are there some better methods or tools that I'm missing?
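
    One concrete option (a sketch, not a full answer): etckeeper wraps exactly the "/etc under git" idea, running as root and hooking into apt so package operations are committed automatically, and a dpkg selections dump covers the installed-package side. Fuller solutions are configuration-management tools such as Puppet, Chef or CFEngine, which describe the desired state declaratively rather than logging changes.

        # Put /etc under version control, with commit hooks around apt runs.
        # (etckeeper's default VCS varies by release; set VCS=git in
        # /etc/etckeeper/etckeeper.conf if needed.)
        sudo apt-get install etckeeper git-core
        sudo etckeeper init
        sudo etckeeper commit "Initial import of /etc"

        # Snapshot the installed package set so a new machine can be
        # brought to the same state later.
        dpkg --get-selections > package-selections
        # ...and on the replacement system:
        #   sudo dpkg --set-selections < package-selections
        #   sudo apt-get dselect-upgrade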

    Read the article

  • How to diagnose computer freezing problem

    - by reinierpost
    I have a laptop (a Medion from Aldi) that tends to hang quite often - so often, in fact, that several attempts to install Windows XP or Ubuntu on it have all failed. However, I am able to boot and run Ubuntu as found on the standard Ubuntu 10.10 installation image. I have done this twice so far. The first time everything was running smoothly, until at some point the GUI (i.e. X) became unresponsive. The cursor kept moving with the mouse, but menus would no longer show and clicking things no longer produced any response. So I switched to the consoles (Ctrl-F1, Ctrl-F2, etc.), which in this setup automatically run shells. The shells were still responsive, and the cd command would still work, but any command that invoked an executable (e.g. /bin/ls, or cd /bin; ./find) caused the shell to hang uninterruptibly. My hypothesis was that all attempts at disk access were hanging, but I didn't actually try a command like echo /proc/$$ or while read line; do echo $line; done < /var/log/syslog to verify this. Another possibility is that an essential system library is cached in memory and somehow failing to function properly. The second time I left the system running overnight and it didn't hang itself spontaneously. I'm not sure I have the patience to just twiddle with the running system until the condition reappears, and I'm not sure what to do once it does. Clearly we can rule out a software cause. It seems disk-access related, but clearly it's not a permanent hard disk failure because the system will reboot just fine. What kind of hardware problem might produce these symptoms? Could it be a memory problem?
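
    A rough sketch of things to try (smartctl needs the smartmontools package; /dev/sda is a placeholder for the laptop's drive):

        # While hung: shell builtins and /proc reads need no disk access,
        # so if these still work the hang is probably stuck block I/O.
        echo "builtins still respond: $$"
        read line < /proc/loadavg && echo "$line"

        # After a clean boot: look for earlier I/O, controller or machine-check errors.
        dmesg | grep -Ei 'ata|i/o error|dma|mce|temperature' | tail -n 40

        # Drive self-assessment and reallocated/pending sector counts.
        sudo smartctl -a /dev/sda | grep -Ei 'overall|reallocated|pending|offline'

        # Memory is easiest to rule out with the memtest86+ entry on the
        # Ubuntu live image's boot menu - let it run a few passes overnight.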

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script> You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script> You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; } If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products"); Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.

What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and Test Agents we'll make use of Windows Azure Connect.

Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect.

Azure Compute: Windows Azure Compute provides developers with a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
    Extra Small            Shared      768 MB     $0.04
    Small                  1           1.75 GB    $0.12
    Medium                 2           3.50 GB    $0.24
    Large                  4           7.00 GB    $0.48
    Extra Large            8           14.00 GB   $0.96

You might want to review the Windows Azure Pricing FAQ. Let's Get Started building the Test Rig…

    Configuration   Machine Role                        Comments
    VM – 1          Domain Controller for Playpit.com   On Premise
    VM – 2          TFS, Test Controller                On Premise
    VM – 3          Test Agent                          Cloud

In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role, for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the view menu bar select the Windows Azure Activity Log window.       Now you should be able to see the deployment progress in real time.             In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role.       Once the deployment is complete you should be able to RDP (go to run prompt type mstsc and in the pop up the machine name) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the connect module, you can connect to the Test Agent machine using Windows Azure Management Portal and troubleshoot the reason for failure, you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just that enter the password unencrypted)’, fix it or manually join the machine to the domain. Once you have managed to Join the Test Agent VM to the Domain move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give your self a pat on the back, you are 80% mission accomplished!         Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles and click on Test Rig, click Edit Group, the edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check the ‘Allow connections between endpoints in the group’ with this you will enable to communication between test controller and test agents and test agents/test agents. Click Save.      Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed, make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted the iso. In my case, i have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user. i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something's otherwise).        In the run options, you can select ‘service’ you do not need to run the agent as interactive un less you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, i would never do that, i would create a separate test user service account for this purpose. 
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject

    Read the article

  • CodePlex Daily Summary for Tuesday, October 09, 2012

    CodePlex Daily Summary for Tuesday, October 09, 2012Popular ReleasesScript SQL Server Configuration: Release 3.0.9: Release 3.0.9 Rewrote trigger scripting. If encrypted triggers are encountered they are listed in a commented line. Scripts event notifications Added an option to script a single database (in addition to the instance) using the /scriptdb parameter. Script user-defined end points Script Service Broker objects Skip database mail on Express EditionMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...D3 Loot Tracker: 1.5.2: now recording server ip for each drop.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...DevLib: 69721 binary dll: 69721 binary dllVidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...Sofire Suite v1.6: XSqlModelGenerator.AddIn: 1、?? VS2010/2012 2、?? .NET FRAMEWORK 2.0 3、?? SOFIRE V1.6ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y...Z3: Z3 4.1.2: Minor fixes. Now, z3 compiles with gcc 4.7.x.NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support) Improved diagnostic information for deployment Decreased boot time Bug fixes Work Item 1736 - Create link for MFDeploy under start menu Work Item 1504 - Customizing lwIP o...NTCPMSG: V1.2.0.0: Allocate an identify cableid for each single connection cable. * Server can asend to specified cableid directly.Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in Supported Environments Microsoft Office Word 2007 and 2010 X86 (32-bit) Team Foundation Server 2010 Object Model TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-Bar Details Tool has been reviewed by Visual Studio ALM Rangers Tool has been through an independent technical and quality review All critical bugs have been resolved Known Issues / Bugs WI#43553 - The Acceptance criteria is not pu...UMD????? - PC?: UMDEditor?????V2.7: ??:http://jianyun.org/archives/948.html =============================================================================== UMD??? ???? =============================================================================== 2.7.0 (2012-10-3) ???????“UMD???.exe”??“UMDEditor.exe” ?????????;????????,??????。??????,????! ??64????,??????????????bug ?????????????,???? ???????????????? 
???????????????,??????????bug ------------------------------------------------------- ?? reg.bat ????????????。 ????,??????txt/u...Untangler: Untangler 1.0.0: Add a missing file from first releaseNew Projects3dBuzz: Denise's website for 3d projection mapping artists to develop a web community to show and discuss their work.Advanced DataGridView with Excel-like auto filter: Windows Forms DataGridView Control with Excel-Like auto-filter context menu Windows Forms DataGridView ??????? ? Excel-???????? ????-????????azure media services admin panel: Simple azure media services dashboard. Upload media assets, queue encoder tasks and stream your audio\video assets.BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to clean them up while still keeping some of the ones made weeks, months or years ago.Bible Lib: BibleLib is a .net compatible library containing information for books, chapters and verses in the Old Testament.CRM 4.0 to CRM 2011 Queues Upgrades: for more information and documentation, please refer to http://mayankp.wordpress.com/2012/05/25/crm-4-0-to-crm-2011-queues-upgrades/ CSV File Reader and Writer for .NET: C# classes for reading and writing CSV files. Support for multi-line fields, custom delimiter and quote characters, options for how empty lines are handled.DotNetNuke Boards: DNN Boards is a task management solution that allows each user to have their own task board or social group members can collaborate within a single board.FarmaciasCruzAzul: Sistema Cruz AzulFindValueInDatabase: This solution finds the value you input in the hole given database and tells you which columns of which tables have the value you are looking for.Fish Tank: Social networking site for Aquarium enthusiasts.GSBA Apps: GSBA App for Windows 8Heuristics for the Vehicle Routing Problem: This is the code for our class, IEMS 482.Icaro - UPN: Icaro UPN es la recopilación de todos los preyectos realizados en clases de los diferentes proyectos que Enseño, espero les Sirva.JSLogger - free logging library for JavaScript: JSLogger is a free JavaScript Library for log information during the duration of your client script. The target is very easy >>Find every javasvript error<<Just little strategy: Just little strategy writen in C# using XNA Game Studio Now under developmentLightweight Medata Reader: The Lightweight Metadata Reader (LMR) is a tooling friendly version of the CLR Reflection APIs that takes no dependency on the CLR Loader.My Solution: No description for it nowoden????: ???????????????^^ooaavee.net: TODOPath Splitter: Path Splitter uses Roslyn to convert a method into a set of methods each equivalent to a distinct execution path. Assume annotations are added for use with Pex.Planning Poker for Azure: Planning Poker application allows distributed teams to play planning poker just using web browser. 
It can be deployed to IIS or Azure cloud service.plasmatrim.net: A library for controlling the USB PlasmaTrim from a .net application.powersaver: A simple utility, which will turn off your monitor when you lock your work station.Project13251008: gterProject13261008: asdsaProject13271008: sdfPyfus Reload: Server-side framework for Dofus 1.29.1 gameRei do Biscuit: E-commerce do rei do biscuitSofire Suite v1.6: Sofire Suite v1.6Sofire XSql: Sofire XSqlSosa Analysis Module (SAM): Numerical Analysis and data visualization program for SOSA psychological experiment software.SyncProject: The summary is coming soon... Just be patient ! Tistory Syntax Highlighter: ???? SyntaxHighlighter ????TJBHJJ: this is a website for company!tricogol App: nonetuts: My first SVNWatchr Change Control Management: Change control management for development and administrative teams focused on lean or agile processes.WCF Simple multi chat: This is a simple Windows form application that it's like a 'chat room'. Multiple users can login and chat with others users. The chat's core is powered by WCFWindows Event Log Email Notification: This will read the windows event logs based on the supplied xpath filters and email the specified addresses if there are matches.

    Read the article

  • Core Data error when assigning variable with one-to-one relationship

    - by Hoang Pham
    I tried to assign a managed object (C) with its property another managed object (B) (a one-to-one relationship) in which this other managed object (B) has a to-many relationship with one other managed object (A). There is an error from this assignment in which I copied as follows: #0 0x020e53a7 in ___forwarding___ #1 0x020c16c2 in __forwarding_prep_0___ #2 0x02078988 in CFRetain #3 0x0207a728 in CFSetAddValue #4 0x020c2fb2 in CFSetCreate #5 0x01e51ce8 in -[_NSFaultingMutableSet copyWithZone:] #6 0x020afcca in -[NSObject copy] #7 0x01e50d22 in -[NSManagedObject(_NSInternalMethods) _newPropertiesForRetainedTypes:andCopiedTypes:preserveFaults:] #8 0x01e51aa0 in -[NSManagedObject(_NSInternalMethods) _newAllPropertiesWithRelationshipFaultsIntact__] #9 0x01e519b4 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _establishEventSnapshotsForObject:] #10 0x01e51866 in _PFFastMOCObjectWillChange #11 0x01e516c5 in _PF_ManagedObject_WillChangeValueForKeyIndex #12 0x01e51525 in _sharedIMPL_setvfk_core #13 0x01e51483 in _PF_Handler_Public_SetProperty #14 0x01e546d1 in -[NSManagedObject(_NSInternalMethods) _didChangeValue:forRelationship:named:withInverse:] #15 0x0030ec1e in NSKVONotify #16 0x002aae2a in -[NSObject(NSKeyValueObserverNotification) didChangeValueForKey:] #17 0x01e5212f in _PF_ManagedObject_DidChangeValueForKeyIndex #18 0x01e515b1 in _sharedIMPL_setvfk_core #19 0x01e55827 in _svfk_5 I don't understand very well what the exact description of this error is. Can someone explain to me what it is and how to solve this one. Note that all other assignments in which the managed object B does not have any A items do not raise this error. ObjectC *objectC = [NSEntityDescription insertNewObjectForEntityForName:@"ObjectC" inManagedObjectContext:managedObjectContext]; objectC.objectB = objectB; Thank you in advance. I added some more NSZombieEnabled/MallocStackLogging generated log: 2010-05-18 17:28:05.327 Foo[2069:207] *** -[CFSet retain]: message sent to deallocated instance 0x800c880 (gdb) shell malloc_history 207 0x800c880 malloc_history cannot examine process 207 because the process does not exist. 
(gdb) shell malloc_history 2069 0x800c880 ALLOC 0x800c880-0x800c884 [size=5]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _endElementNs | -[Parser parser:didEndElement:namespaceURI:qualifiedName:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | strdup | malloc | malloc_zone_malloc ---- FREE 0x800c880-0x800c884 [size=5]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _endElementNs | -[Parser parser:didEndElement:namespaceURI:qualifiedName:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_free | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser 
downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | 
GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c700-0x800c893 [size=404]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | CFCalendarDecomposeAbsoluteTime | _CFCalendarDecomposeAbsoluteTimeV | __CFCalendarSetupCal | __CFCalendarCreateUCalendar | ucal_open | icu::Calendar::createInstance(icu::TimeZone*, icu::Locale const&, UErrorCode&) | malloc | malloc_zone_malloc ---- FREE 0x800c700-0x800c893 [size=404]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | _CFRelease | free ALLOC 0x800c880-0x800c8c7 [size=72]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | +[NSEntityDescription insertNewObjectForEntityForName:inManagedObjectContext:] | +[NSManagedObject(_PFDynamicAccessorsAndPropertySupport) allocWithEntity:] | _PFAllocateObject | malloc_zone_calloc ---- FREE 0x800c880-0x800c8c7 [size=72]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | _performRunLoopAction | -[_PFManagedObjectReferenceQueue _processReferenceQueue:] | _PFDeallocateObject | malloc_zone_free ALLOC 0x800c880-0x800c8a7 [size=40]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[TileLayer display] | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | WebCore::TiledSurface::drawLayer(CALayer*, CGContext*) | WKWindowDrawRect | WKViewDisplayRect | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | -[WebHTMLView drawSingleRect:] | -[WebFrame(WebInternal) _drawRect:contentsOnly:] | 
WebCore::FrameView::paintContents(WebCore::GraphicsContext*, WebCore::IntRect const&) | WebCore::RenderLayer::paint(WebCore::GraphicsContext*, WebCore::IntRect const&, WebCore::PaintRestriction, WebCore::RenderObject*) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderFlow::paintLines(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RootInlineBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineFlowBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineTextBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::paintTextWithShadows(WebCore::GraphicsContext*, WebCore::Font const&, WebCore::TextRun const&, int, int, WebCore::IntPoint const&, int, int, int, int, WebCore::ShadowData*, bool) | WebCore::GraphicsContext::drawText(WebCore::Font const&, WebCore::TextRun const&, WebCore::IntPoint const&, int, int) | WebCore::Font::drawSimpleText(WebCore::GraphicsContext*, WebCore::TextRun const&, WebCore::FloatPoint const&, int, int) const | WebCore::Font::drawGlyphBuffer(WebCore::GraphicsContext*, WebCore::GlyphBuffer const&, WebCore::TextRun const&, WebCore::FloatPoint&) const | WebCore::Font::drawGlyphs(WebCore::GraphicsContext*, WebCore::SimpleFontData const*, WebCore::GlyphBuffer const&, int, int, WebCore::FloatPoint const&, bool) const | CGGStateSetFont | maybeCopyTextState | calloc | malloc_zone_calloc ---- FREE 0x800c880-0x800c8a7 [size=40]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[TileLayer display] | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | WebCore::TiledSurface::drawLayer(CALayer*, CGContext*) | WKWindowDrawRect | WKViewDisplayRect | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | -[WebHTMLView drawSingleRect:] | -[WebFrame(WebInternal) _drawRect:contentsOnly:] | WebCore::FrameView::paintContents(WebCore::GraphicsContext*, WebCore::IntRect const&) | 
WebCore::RenderLayer::paint(WebCore::GraphicsContext*, WebCore::IntRect const&, WebCore::PaintRestriction, WebCore::RenderObject*) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderFlow::paintLines(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RootInlineBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineFlowBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineTextBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::paintTextWithShadows(WebCore::GraphicsContext*, WebCore::Font const&, WebCore::TextRun const&, int, int, WebCore::IntPoint const&, int, int, int, int, WebCore::ShadowData*, bool) | WebCore::GraphicsContext::restorePlatformState() | CGContextRestoreGState | CGGStackRestore | CGGStateRelease | textStateRelease | free ALLOC 0x800c880-0x800c8bf [size=64]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | CA::timer_callback(__CFRunLoopTimer*, void*) | run_animation_callbacks(double, void*) | -[UIViewAnimationState animationDidStop:finished:] | -[UIViewAnimationState sendDelegateAnimationDidStop:finished:] | -[UINavigationTransitionView _navigationTransitionDidStop] | -[UIView(Hierarchy) removeFromSuperview] | -[UITextField resignFirstResponder] | -[UIFieldEditor resignFirstResponder] | -[UIKeyboardImpl setDelegate:] | -[UIKeyboardImpl setDelegate:force:] | -[UITextInteractionAssistant setGestureRecognizers] | -[UITextInteractionAssistant addTwoFingerRangedSelectRecognizer] | -[UILongPressGestureRecognizer initWithTarget:action:] | -[__NSPlaceholderSet init] | -[__NSPlaceholderSet initWithCapacity:] | __CFSetInit | _CFRuntimeCreateInstance | malloc_zone_malloc

    Read the article

  • Content display problems when using Suckerfish menus with 960.gs and IE

    - by Cedar Jensen
    I'm using 960.gs layout and when I add the suckerfish menu as part of the content to one of the grids, the contents of adjacent siblings bleed through the menu in all versions of IE. In the listed html below, the text from 'belowFoldSection' will appear through the menu when it is visible and has enough items to make it span over 2nd section. However, the contents of 'introSummary' will be underneath the menu, as expected. I've set the z-index for #nav and #nav ul in my css and this of course makes it work in FF, Chrome and Safari, but not in IE (because IE incorrectly assigns child elements its own z-index). If I change the .grid_nn class 'position' attribute (set by default in the 960 template) from 'relative' to absolute, this fixes it in IE. However, it is my understanding that I don't want the child elements of the 'container_12' to be taken out of the flow of the document and want them positioned relative to the .container_12's starting point. (Changing the attribute to absolute causes other general layout problems) Can anyone suggest a work-around? My html: <div class="container_12"> <!--First section where menu lives--> <div class="grid_12" id="mainSection"> <div class="grid_4 alpha" id="intro"> <p>Start of menu here</p> <div id="subMenu"> <ul id="nav"> <li><a href="#">Item 1</a> <ul> <li><a href="#">Burrowing gobies</a></li> <li><a href="#">Dartfishes</a></li> <li><a href="#">Eellike gobies</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 2</a> <ul> <li><a href="#">Remoras</a></li> <li><a href="#">Tilefishes</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 3</a> <ul> <li><a href="#">Climbing perches</a></li> <li><a href="#">Labyrinthfishes</a></li> <li><a href="#">Kissing gouramis</a></li> <!--10 more for longer list --> </ul> </li> </ul> <div id="introSummary"> <h1>PERCIFORMES! (1)</h1> <p>Welcome to the world of Perciformes - perch-like fish including the world famous <strong>Suckerfish</strong></p> </div> </div> <!-- end of sub menu --> </div> <div class="grid_8 omega" id="summary"> <p>Some stuff goes here</p </div> </div> <!-- End of first section --> <div class="clear">&nbsp;</div> <div class="grid_12 spacer"> </div> <div class="grid_4" id="belowFoldSection"> <p>Here is some stuff I want to appear below the menu when the pop-up is visible</p> </div> </div> <!-- container_12 --> The suckerfish css file: #nav, #nav ul { /* all lists */ padding: 0; margin: 0; list-style: none; line-height: 1; z-index: 99; } #nav a { display: block; width: 10em; } #nav li { /* all list items */ float: left; width: 10em; } #nav li ul { /* second-level lists */ position: absolute; background: orange; width: 10em; left: -999em; } #nav li:hover ul, #nav li.sfhover ul { /* lists nested under hovered list items */ left: auto; } Default 960.gs css: .container_12, .container_16 { margin-left: auto; margin-right: auto; width: 960px; } .grid_1, .grid_2, .grid_3, .grid_4, .grid_5, .grid_6, .grid_7, .grid_8, .grid_9, .grid_10, .grid_11, .grid_12, .grid_13, .grid_14, .grid_15, .grid_16 { display: inline; float: left; position: relative; margin-left: 10px; margin-right: 10px; }

    Read the article

  • Policy based design and defaults.

    - by Noah Roberts
    Hard to come up with a good title for this question. What I really need is to be able to provide template parameters with different number of arguments in place of a single parameter. Doesn't make a lot of sense so I'll go over the reason: template < typename T, template <typename,typename> class Policy = default_policy > struct policy_based : Policy<T, policy_based<T,Policy> > { // inherits R Policy::fun(arg0, arg1, arg2,...,argn) }; // normal use: policy_base<type_a> instance; // abnormal use: template < typename PolicyBased > // No T since T is always the same when you use this struct custom_policy {}; policy_base<type_b,custom_policy> instance; The deal is that for many abnormal uses the Policy will be based on one single type T, and can't really be parameterized on T so it makes no sense to take T as a parameter. For other uses, including the default, a Policy can make sense with any T. I have a couple ideas but none of them are really favorites. I thought that I had a better answer--using composition instead of policies--but then I realized I have this case where fun() actually needs extra information that the class itself won't have. This is like the third time I've refactored this silly construct and I've got quite a few custom versions of it around that I'm trying to consolidate. I'd like to get something nailed down this time rather than just fish around and hope it works this time. So I'm just fishing for ideas right now hoping that someone has something I'll be so impressed by that I'll switch deities. Anyone have a good idea? Edit: You might be asking yourself why I don't just retrieve T from the definition of policy based in the template for default_policy. The reason is that default_policy is actually specialized for some types T. Since asking the question I have come up with something that may be what I need, which will follow, but I could still use some other ideas. template < typename T > struct default_policy; template < typename T, template < typename > class Policy = default_policy > struct test : Policy<test<T,Policy>> {}; template < typename T > struct default_policy< test<T, default_policy> > { void f() {} }; template < > struct default_policy< test<int, default_policy> > { void f(int) {} }; Edit: Still messing with it. I wasn't too fond of the above since it makes default_policy permanently coupled with "test" and so couldn't be reused in some other method, such as with multiple templates as suggested below. It also doesn't scale at all and requires a list of parameters at least as long as "test" has. Tried a few different approaches that failed until I found another that seems to work so far: template < typename T > struct default_policy; template < typename T, template < typename > class Policy = default_policy > struct test : Policy<test<T,Policy>> {}; template < typename PolicyBased > struct fetch_t; template < typename PolicyBased, typename T > struct default_policy_base; template < typename PolicyBased > struct default_policy : default_policy_base<PolicyBased, typename fetch_t<PolicyBased>::type> {}; template < typename T, template < typename > class Policy > struct fetch_t< test<T,Policy> > { typedef T type; }; template < typename PolicyBased, typename T > struct default_policy_base { void f() {} }; template < typename PolicyBased > struct default_policy_base<PolicyBased,int> { void f(int) {} };

    Read the article

  • PHP: How to automate building a 100 <UL>/<LI> menuitems, while keeping the Menu Structure File Flat / Simply Managable?

    - by Sam
    Above: current "stupid" menu. (entire ul/li menu for javascript menu system) + (some li lines as page-specific submenu) Hi folks! With passion for automation and elegancy, but limited knowledge/knowhow, im stuck with "my hands in my hair" as we Dutch say, for my current menu system works perfectly, but is a pain in the a*s to update! So, i would appreciate it greatly, if you can suggest how to automate this in php: how to let the php generate the html menu code basing on a flat menu input file with TABS indented. OLD SITUATION <ul> <!-- about 100 of these <li>....</li> lines --> <li><a href="carrot.php"><p class="mnu" style="background-position:0 -820px"><? echo __("carrot juice") ?></p></a></li> <!-- lots of data, with only little bit thats really the menu itself--> </ul a javascript file reads a ul/li structure as input to build menu of format in that ul/li, the items with a hyperlink and sprite-bg position represent webpages, (inside LI) while items without hyperlink and sprite-bg are just headers of that menusection, (inside H6) to highlight the current page in the menu, the javascript menumaker uses an id number. this number corresponds to the consequtive li that is a webpage, skips h6 headers correctly. these h6 headers are only there for when importing sections of the same menu as submenu. non-li headers are not shown in menu, nore counted by the javascript menu for their ID. to know which page should be shown, i have to count from ID 0, the li items till finding the current webpage in the li structure and then manually put it in each webpage! BUT: changing an item in li order, means stupidly re-counting their entire li again! each webpage has an icon (= sprite bg-position numer), which is also used in the webpage. INTENDED RESULT I dream of, once setting what the current webpage is (e.g carrot.php) the menu system automatically "finds" and "counts" the li's and returns the id nr (for proper highlight of main menu); generates the entire menu html, and depending on which headings are set for submenu, (e.g. meals, drinks) generates those submenu (entire section below each given header); ginally adds h5 highlight inside the li of that submenu item. For the menu, i wish an easily readable, simple plain txt menu that is indented with tabs, (each tab is one depth for example) and further tabs follow for url and sprite position of icon. MY DREAM MENU-MANAGEMENT FILE |>TAB SEPARATED/INDENTED FLATMENU FILE |MUST BE CALCULATED BY PHP: |>MENUTEXT============URL=============SPRITE=====|ID===TAG================== |>about "#" -520 |00 li |> INFORMATION |—— h6 |> physical state "physical.php" -920 |01 li |> mental health "mental.php" -10 |02 li |> |>apetite "#" -1290 |03 li |> meals "#" -600 |04 li |> COLD MEAL |—— h6 |> egg salade "salad.php" -1040 |05 li |> salmon fish "salmon.php" -540 |06 li |> HOT MEAL |—— h6 |> spare ribs "spareribs.php" -120 |07 li |> di macaroni "macaroni.php" -870 |08 li |> |> drinks "#" -230 |09 li |> JUCY DRINK |—— h6 |> carrot juice "carrot.php" -820 |10 li |> mango hive "mango.php" -270 |11 li DESIRED CHRONOLOGY php outputs the entire ul/li html so the javascript can show the menu: webpage items go inside li tags, and header items go inside h6 tags, e.g. <h6>JUCY DRINK</h6> Each website page has a url filename [eg: salad.php]. Based on this given fact, the php menu generator detects the pagename, gives the IDnr of the position of that page according to the li-item nr and sets variable for javascript to highlight current menu item. 
the menu items below the specified headers are loaded as submenu in which the current page.php is wrapped inside h5 to highlight current page in submenu: e.g. (<li><h5><a href="carrot.php"><p>..etc..</p></h5></li> Question Which methods / steps / (chronological)ways are there for doing this? I am no good in php programming, but am learning it so please dont write any code without a line of comment why I should use that method etc. Where do I start? If I am unclear in my question, please ask. Thanks. Much appreciated!! Concrete Task List from the provided Comments/Answers, sofar: (RobertB) First, get some PHP code working that can read through a tab-delimited file and put the data into an appropriate data structure. NOW WORKING AT THIS

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
Laerte Junior has previously helped me personally to resolve an issue with the Powershell installation on my computer. He did an awesome job. He has sent another wonderful article regarding performance counters for readers of this blog. I really liked it, and I expect all of you Powershell geeks will like it as well. As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company’s place, of course. So what we have to do is create methods through which we can facilitate our daily processes and go home early, and eventually have a nice time with our family (and not sleeping on the couch). As a consultant or fixed employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, its interface is getting more and more complicated. To deal with this, I thought of a solution using Powershell. Yes, with a few lines of Powershell, you can configure which counters to use. And with one more line, you can already start collecting data. Let’s see one scenario: You are a consultant who has several clients and has just closed another project troubleshooting an SQL Server environment. You are to use Perfmon to collect data from the server and you already have XML configuration files made with the counters that you will be using: a file for memory bottlenecks, one for CPU, etc. With one Powershell command line for each XML file, you start collecting. The output of the collection is a TXT file that is ready to be loaded into SQL Server. With two command lines for each XML file, you complete the whole data collection process. Creating an XML configuration file for memory counters: Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -NewFile Creating an XML configuration file for the Buffer Manager counters Page lookups/sec, Page reads/sec, Page writes/sec, and Page life expectancy: Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile Then you start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt For the Buffer Manager collection, you need one more counter, the Buffer cache hit ratio. Just add the new counter to BufferManager.xml, omitting the -NewFile parameter: Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" And start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt Don't know which counters are in the Buffer Manager category? Simple! Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters Let’s see one output file, as shown below. It is ready to bulk insert into SQL Server. As you can see, Powershell makes this process incredibly easy and fast. Do you want to see more examples? 
Visit my blog at Shell Your Experience. You can find more about Laerte Junior over here: www.laertejuniordba.spaces.live.com www.simple-talk.com/author/laerte-junior www.twitter.com/laertejuniordba SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/ Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Powershell
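If you do not have Laerte's Get-PerfCounter functions installed, the same collect-and-export idea can be sketched with the built-in Get-Counter and Export-Counter cmdlets (PowerShell 2.0 and later); the counter paths, sample count, and output file below are illustrative assumptions rather than part of Laerte's scripts:

# A minimal sketch using the built-in cmdlets; adjust the counter paths for named instances.
$counters = '\Memory\Available MBytes',
            '\SQLServer:Buffer Manager\Page life expectancy',
            '\SQLServer:Buffer Manager\Buffer cache hit ratio'

# Sample every 10 seconds for 360 samples (about an hour), then export to CSV
# so the file can be bulk inserted into SQL Server afterwards.
Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 360 |
    Export-Counter -Path 'C:\temp\BufferManager.csv' -FileFormat CSV -Force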

    Read the article

  • Add the Recycle Bin to Start Menu in Windows 7

    - by Matthew Guay
Have you ever tried to open the Recycle Bin by searching for “recycle bin” in the Start menu search, only to find nothing?  Here’s a quick trick that will let you find the Recycle Bin directly from your Windows Start menu search. The Start menu search may be the best timesaver ever added to Windows.  In fact, we use it so much that it seems painful to manually search for a program when using Windows XP or older versions of Windows.  You can easily find files, folders, programs and more through the Start menu search in both Vista and Windows 7. However, one thing you cannot find is the Recycle Bin; if you enter this in the Start menu search it will not find it. Here’s how to add the Recycle Bin to your Start menu search. What to do To access the Recycle Bin from the Start menu search, we need to add a shortcut to the Start menu.  Windows includes a personal Start menu folder, and an All Users Start menu folder which all users on the computer can see.  This trick only works in the personal Start menu folder. Open up an Explorer window (simply click the Computer link in the Start menu), click the white part of the address bar, and enter the following (substitute your username for your_user_name), then hit Enter. C:\Users\your_user_name\AppData\Roaming\Microsoft\Windows\Start Menu Now, right-click in the folder, select New, and then click Shortcut. In the location box, enter the following: explorer.exe shell:RecycleBinFolder When you’ve done this, click Next. Now, enter a name for the shortcut.  You can enter Recycle Bin like the standard shortcut, or you could name it something else such as Trash…if that’s easier for you to remember.  Click Finish when you’re done. By default it will have a folder icon.  Let’s switch that to the standard Recycle Bin icon.  Right-click on the new shortcut and click Properties. Click Change Icon… Type the following in the “Look for icons in this file:” box, and press the Enter key on your keyboard: %SystemRoot%\system32\imageres.dll Now, scroll and find the Recycle Bin icon and click OK. Click OK in the previous dialog, and now your Recycle Bin shortcut has the correct icon.   You can even have multiple shortcuts with different names, so when you search either Recycle Bin or Trash it will come up in the Start menu.  To do that, simply repeat these directions, and enter another name of your choice at the prompt.  Here we have both a Recycle Bin and a Trash icon. Now, when you enter Recycle Bin (or Trash, depending on what you chose) in your Start menu search, you will see it at the top of your Start menu.  Simply press Enter or click on the icon to open the Recycle Bin.   This trick will work in Windows Vista too!  Simply follow these same directions, and you can add the Recycle Bin to your Vista Start menu and find it via search. This is a simple trick, but may make it much easier for you to open your Recycle Bin directly from your Windows Vista or 7 Start menu search.  If you’re using Windows 7, you can also check out our directions on how to Add the Recycle Bin to the Taskbar in Windows 7.
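If you would rather script the shortcut than click through the dialogs, here is a small PowerShell sketch of the same steps; the shortcut name and the icon index in imageres.dll are assumptions, so pick whichever icon looked right in the Change Icon dialog.

# Create the "Recycle Bin" shortcut in the per-user Start Menu folder.
$startMenu    = [Environment]::GetFolderPath('StartMenu')
$shortcutPath = Join-Path $startMenu 'Recycle Bin.lnk'

$shell    = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut($shortcutPath)
$shortcut.TargetPath   = 'explorer.exe'
$shortcut.Arguments    = 'shell:RecycleBinFolder'
# Icon index 50 is an assumption - adjust it to the Recycle Bin icon you find in imageres.dll.
$shortcut.IconLocation = "$env:SystemRoot\System32\imageres.dll,50"
$shortcut.Save()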

    Read the article

  • PowerShell Control over Nikon D3000 Camera

My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features.  For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to, but rather it shows up as Computer\D3000.  After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter.  Nikon describes the use of the MTP and PTP protocols in their cameras here. What I'm really trying to do is gain access to the camera's file system via PowerShell.  I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date.  I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell.  If you know, I'd appreciate any links/tips you can provide.  All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet).  (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.) However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands.  Credit for this goes to James O'Neill and Mark Wilson.  Here's what I was able to do: Taking Pictures via PowerShell with the D3000 First, connect your camera, turn it on, and launch PowerShell.  Execute the following commands to see what commands your device supports:  $dialog = New-Object -ComObject "WIA.CommonDialog" $device = $dialog.ShowSelectDevice() $device.Commands You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command: $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}") Imagine my surprise when this actually took a picture (with auto-focus)! Imagine what you could do with a camera completely under the control of your computer.  Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period).  Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement.  I may have to give it a shot and report back.
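To make the time-lapse idea concrete, here is a minimal sketch that reuses the same WIA calls shown above; the take-picture GUID comes from the commands above, while the shot count and interval are just assumptions to adjust.

# Time-lapse sketch: 60 shots, one per minute, using the WIA take-picture command.
$dialog = New-Object -ComObject "WIA.CommonDialog"
$device = $dialog.ShowSelectDevice()

$takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"
for ($i = 0; $i -lt 60; $i++) {
    $device.ExecuteCommand($takePicture) | Out-Null
    Start-Sleep -Seconds 60   # wait a minute before the next shot
}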

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison. Gmail Goes Public It’s hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite-only. In 2007 Gmail dropped the invite-only requirement (although they would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history Gmail had the slickest web-based email around with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding. Deep Blue Proves Itself a Chess Master Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, wherein Deep Blue held its own but ultimately lost a 4-2 match, shook a lot of people up. What did it mean if something considered as elegant and quintessentially human an endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum. Birth of Thomas Edison Thomas Alva Edison was one of the most prolific inventors in history and holds an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games. Other Notable Moments from This Week in Geek History Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History: 1971 – Apollo 14 returns to Earth after the third Lunar mission. 1974 – Birth of Robot Chicken creator Seth Green. 1986 – Death of Dune creator Frank Herbert. Goodnight Dune. 1997 – The Simpsons becomes the longest-running animated show on television. Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia.

    Read the article

  • Stream Media from Windows 7 to XP with VLC Media Player

    - by DigitalGeekery
So you’ve got yourself a new computer with Windows 7 and you’re itching to take advantage of its ability to stream media across your home network. But the rest of the family is still on Windows XP and you’re not quite ready to shell out the cash for the upgrades. Well, today we’ll show you how to easily stream media from Windows 7 to Windows XP with VLC Media Player. On the host computer running Windows 7, you’ll need to have an account set up with both a username and password. A blank password will not work. The media files will need to be located in a shared folder. Note: If the media files are located within the Public directory, or within the profile of the user account you use to log into the Windows 7 computer, they will be shared automatically. Sharing your Media Folders On your Windows 7 computer, right-click on the folder containing the files you’d like to stream and choose Properties.     On the Sharing tab of the folder properties, click the Share button. Click OK.   Type or select from the drop-down the user account you’ll use to log in, or select “Everyone” to share with all users. Then click Add. You may change the permission level, but only Read permission is required to play the media. Repeat this process for any additional folders you wish to share.    The Windows XP Client Computer Now that we’ve shared our media folders from the Windows 7 computer, we’re ready to play our files on the Windows XP computer. Download and install the VLC Media Player. (See link below.) Then open VLC. Click on Media from the menu and select Open File… Browse your network for the shared folder that contains your media.   You’ll be prompted to log in to the host computer. Provide the credentials for a user on the Windows 7 computer. Click OK.   Select your media file and click Open.    Your media playback will begin momentarily.   This is a nice and easy way to stream media across your home network without upgrading multiple computers to Windows 7.  Plus, VLC is certainly no slouch as a media player. It’ll play virtually any video or audio file you can throw at it. Have you already upgraded all your home PCs to Windows 7? Check out our previous article on streaming media between Windows 7 computers on your home network. Download VLC Media Player
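If you prefer the command line to the Properties dialog, the share on the Windows 7 host can also be created from an elevated prompt; the share name and folder below are assumptions, so substitute your own media folder.

# Share the media folder read-only for everyone (run from an elevated prompt on the Windows 7 host).
net share MediaShare="C:\Users\Public\Videos" /GRANT:Everyone,READ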

    Read the article

  • Visual Studio 2010 Beta 2 Startup Failures

    - by Rick Strahl
    I’ve been working with VS 2010 Beta 2 for a while now and while it works OK most of the time, the environment seems very, very fragile when it comes to crashes and installed packages. Specifically, I’ve been working just fine for days, then when VS 2010 crashes it will not re-start. Instead I get the good old Application cannot start dialog. Other failures I’ve seen bring forth other just-as-useful dialogs with information overload like Operation cannot be performed, which for me specifically happens when trying to compile any project. After a bit of digging around and a post to Microsoft Connect, the solution boils down to resetting the VS.NET environment. The Application Cannot Start issue stems from a package load failure of some sort, so the workaround for this is typically:

    c:\program files\Visual Studio 2010\Common7\IDE\devenv.exe /ResetSkipPkgs

    In most cases that should do the trick. If it doesn’t and the error doesn’t go away, the more drastic:

    c:\program files\Visual Studio 2010\Common7\IDE\devenv.exe /ResetSettings

    is required, which resets all settings in VS to its installation defaults. Between these two I’ve always been able to get VS to start up and run properly. BTW, it’s handy to keep a list of command line options for Visual Studio around: http://msdn.microsoft.com/en-us/library/xee0c8y7%28VS.100%29.aspx Note that the /? option in VS 2010 doesn’t display all the options available but rather displays the ‘demo version’ message instead, so the above should be helpful. Also note that unless you install Visual C++, the Visual Studio Command Prompt icon is not automatically installed, so you may have to navigate manually to the appropriate folder above.

    Cannot Build Failures
    If you get the Cannot compile error dialog, there is another thing that has worked for me: change your project build target from Debug to Release (or whatever – just change it) and compile again. If that doesn’t work, the reset steps above will do it for me. It appears this failure comes from some sort of interference from other versions of Visual Studio installed on the system and running another version first. Resetting the build target explicitly seems to reset the build providers to a normalized state so that things work in many cases. But not all. Worst case, resetting settings will do it.

    The bottom line for working in VS 2010 has been: don’t get too attached to your custom settings, as they will get blown away quite a bit. I’ve probably been through 20 or more of these VS resets, although I’ve been working with it quite a bit on an internal project. It’s kind of frustrating to see this level of instability in a Beta 2 product which is supposedly the last public beta they will put out. On the other hand, this beta has been otherwise rather stable and performance is roughly equivalent to VS 2008. Although I mention the crash above, crashes I’ve seen have been relatively rare and no more frequent than in VS 2008, it seems. Given the drastic UI changes in VS 2010 (using WPF for the shell and editor) I’m actually impressed that the product is as stable as it is at this point. Also, I was seriously worried about text quality going to a WPF model, but thankfully WPF 4.0 addresses the blurry text issue with native font rendering to render text crisply on non-ClearType enabled systems. Anyway, I hope these notes are helpful to some of you playing around with the beta and running into problems. Hopefully you won’t need them :-}

    © Rick Strahl, West Wind Technologies, 2005-2010
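    Two other documented devenv.exe switches from the MSDN list above can be handy when tracking these failures down. They aren’t part of the workaround above, just standard options listed here for convenience; the log file name is only an example:

        devenv.exe /SafeMode
        devenv.exe /Log "%TEMP%\vs2010-activity.xml"

    /SafeMode starts Visual Studio with only the default environment and services loaded, which is a quick way to tell whether a third-party package is behind a startup failure, and the activity log written by /Log will usually record which package failed to load.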

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it. A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process, it will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins), and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: “This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE.” http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless build servers for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing, and it prompted a response from Kevin Cunnane from the SSDT product team: “The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine.” I, like many others, would rather not have to install full-blown Visual Studio, and so I asked whether there is any chance they will ever support any of these scenarios:
    1. Installation of all build/deploy pre-requisites without installing the VS shell
    2. TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
    3. 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys
    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again: “The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that.”

    Again, here’s the link to the thread, and it’s worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet
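    For reference, once the pre-requisites from Gert’s post (the SSDT MSBuild targets plus the DacFx assemblies) are on the build machine, the build and deploy steps themselves are ordinary command-line invocations. The project, configuration and server names below are invented for illustration, so treat this as a sketch rather than a blessed recipe:

        msbuild MyDatabase.sqlproj /t:Build /p:Configuration=Release
        SqlPackage.exe /Action:Publish /SourceFile:bin\Release\MyDatabase.dacpac /TargetServerName:localhost /TargetDatabaseName:MyDatabase

    The first line is what a TFS or TeamCity build step would run to produce the .dacpac; the second uses SqlPackage.exe, which ships with DacFx, to deploy it.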

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some stuff, so I'm creating some documents to help developers speed up the learning curve, avoid mistakes and write clean & reliable code. This is my first version of it, so I'm pretty sure I will be adding more stuff to it, so stay tuned!

    C# Official driver notes
    The official 10gen MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project. C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart

    Reference links
    C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center
    MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home
    MongoDB Server Downloads: http://www.mongodb.org/downloads
    MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers
    MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects

    Tutorials
    Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish): http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx
    MongoDB and C#: http://www.codeproject.com/Articles/87757/MongoDB-and-C
    C# driver LINQ tutorial: http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial
    C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial

    Safe Mode Connection
    The C# driver supports two connection modes: safe and unsafe. Safe connection mode only applies to methods that modify data in a database (Inserts, Deletes and Updates). While the current driver defaults to unsafe mode (safeMode == false), it's recommended to always enable safe mode and force unsafe mode only for specific things we know aren't critical. When safe mode is enabled, the driver internally calls the MongoDB "getLastError" function to ensure the last operation has completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability see: http://www.mongodb.org/display/DOCS/getLastError+Command
    If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire & forget) without waiting for the result of the operation. This mode could be useful for creating / updating non-critical data like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode
    The safe mode configuration can be specified at different levels:
    Connection string: mongodb://hostname/?safe=true
    Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
    Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
    Operation: for example, when executing the collection.Insert(document, safeMode) method
    A useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver

    Exception Handling
    Provided safe mode is enabled, the driver ensures that an exception will be thrown if something goes wrong (as said above, when not using safe mode no exception will be thrown no matter what the outcome of the operation is). As explained here https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check the value returned from a driver method that inserts data. With updates the situation is similar to what happens in a relational database: if an update command doesn't affect any records, the call will succeed anyway (no exception thrown) and you have to manually check for something like "records affected". For MongoDB, an Update operation will return an instance of the "SafeModeResult" class, and you can check the "DocumentsAffected" property to ensure the intended document was indeed updated. Note: please remember that an Update method might return a null instance instead of a "SafeModeResult" instance when safe mode is not enabled.

    Useful Community Articles
    Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/
    FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/
    Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson
    MongoDB introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc.: http://www.markus-gattol.name/ws/mongodb.html
    MongoDB collection-level locking support: https://jira.mongodb.org/browse/SERVER-1240
    MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents
    Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read
    MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html
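    To make the safe mode and exception-handling notes concrete, here is a small sketch using the legacy 1.x-era C# driver API referenced above (MongoServer.Create, SafeMode, SafeModeResult). The connection string, database, collection and field names are invented for illustration:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;
        using MongoDB.Driver.Builders;

        class SafeModeExample
        {
            static void Main()
            {
                // safe=true in the connection string turns safe mode on for the whole connection
                var server = MongoServer.Create("mongodb://localhost/?safe=true");
                var database = server.GetDatabase("myapp", SafeMode.True);
                var users = database.GetCollection<BsonDocument>("users", SafeMode.True);

                // With safe mode enabled the driver waits for getLastError, so write
                // failures surface as exceptions instead of being silently dropped.
                var result = users.Update(
                    Query.EQ("email", "someone@example.com"),
                    Update.Set("lastLogin", DateTime.UtcNow),
                    SafeMode.True);

                // An update that matches nothing still succeeds; check DocumentsAffected.
                if (result != null && result.DocumentsAffected == 0)
                {
                    Console.WriteLine("No document was updated.");
                }
            }
        }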

    Read the article

  • A Few of My Favorite HTML5 and CSS3 Online Tools

    - by dwahlin
    I really enjoy coding up HTML5, CSS3, and JavaScript applications, but there are some things that I’m better off writing with the help of a development tool. For example, CSS3 gradients aren’t exactly the most fun thing to write by hand, and the same could be said for animations, transforms, or styles that require various vendor extensions. There are a lot of online tools that can simplify building HTML5/CSS3 sites and increase productivity in the process, so I thought I’d put together a post on a few of my favorite tools.

    HTML5 Boilerplate
    HTML5 Boilerplate provides a great way to get started building HTML5 sites. It includes many best practices out of the box and even includes a few tricks that many people don’t even know about. The custom download option allows you to pick the features that you want to include in the files that are generated. You can read more about it here.

    Initializr
    Although HTML5 Boilerplate provides a great foundation for starting HTML5 sites, it focuses on providing a starting shell structure (namely an HTML page, JavaScript files, and a CSS stylesheet) and doesn’t include much in the way of page content to get started with. Initializr builds on HTML5 Boilerplate and provides an initial test page that can be tweaked to meet your needs. It also provides several different customization options to include/exclude features.

    CSS3 Maker
    CSS3 provides a lot of great features ranging from gradient support to rounded corners. Although many of the features are fairly straightforward, some are pretty involved, such as gradients, animations, and really any styles that require custom vendor extensions to use across browsers. Sure, you can type everything by hand, but sites such as CSS3 Maker provide a visual way to generate CSS3 styles.

    CSS3, Please!
    CSS3, Please! is a code generation tool that can be used to generate cross-browser CSS3 styles quickly and easily. All of the main things you can do with CSS3 are available, including a clever way to visually generate CSS3 transform styles.

    Ultimate CSS Gradient Generator
    CSS3 Maker (above) has a gradient generator built in, but my favorite tool for creating CSS3 gradients is the Ultimate CSS Gradient Generator. If you’ve created gradients in tools like Photoshop then you’ll love what this tool has to offer, especially since it makes it extremely straightforward to work with different gradient stops.

    @font-face Fonts
    Although @font-face has been available for a while, I think fonts are cool and wanted to mention a site that provides a lot of font choices. When used correctly, fonts can really enhance a page; when used incorrectly (think Comic Sans) they can absolutely ruin a page. Several sites exist that provide fonts that can be used with @font-face definitions in CSS style sheets. One of my favorites is Font Squirrel.

    HTML5 & CSS3 Support and Tests
    Interested in knowing what HTML5 and CSS3 features a given browser supports? Want to know how various browsers stack up with each other as far as HTML5/CSS3 support goes? Look no further than the HTML5 & CSS3 Support page or the HTML5 Test page.

    CSS3 Easing Animation Tool
    CSS3 animations aren’t widely supported across browsers right now (I’m not really using them at this point), but they do offer a lot of promise. Creating easings for animations can definitely be a challenge, but they’re something that’s critical for adding that “professional touch” to your animations. Fortunately you can use the Ceaser CSS Easing Animation Tool to simplify the process and handle animation easing with… ease.

    There are several other online tools that I like, but these are some of the ones I find myself using the most. If you have any favorite online tools that simplify working with HTML5 or CSS3, let me know. For more information about onsite or online training, mentoring and consulting solutions for HTML5, jQuery, .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
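    As a quick illustration of why these generators are worth bookmarking, here is the kind of cross-browser rule a gradient tool typically produces for a simple two-color fade. The selector and colors are made up, and the exact set of vendor-prefixed lines will vary with the browsers you choose to target:

        .page-header {
          background: #1e5799;                                                 /* solid-color fallback */
          background: -webkit-linear-gradient(top, #1e5799 0%, #7db9e8 100%); /* Chrome, Safari */
          background: -moz-linear-gradient(top, #1e5799 0%, #7db9e8 100%);    /* Firefox */
          background: -o-linear-gradient(top, #1e5799 0%, #7db9e8 100%);      /* Opera */
          background: linear-gradient(to bottom, #1e5799 0%, #7db9e8 100%);   /* standard syntax */
        }

    Writing and maintaining that stack by hand for every gradient on a site gets old quickly, which is exactly the tedium these tools remove.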

    Read the article

< Previous Page | 180 181 182 183 184 185 186 187 188 189 190 191  | Next Page >