Search Results

Search found 632 results on 26 pages for 'imports'.

Page 17/26

  • Does @import in CSS result in additional http requests?

    - by Mo Boho
    I have an ecommerce site that has about 8 CSS files linked from the header, resulting in 8 separate HTTP requests to the server. I consolidated all the CSS files into 1 big one, resulting in a 67kb (!) file, to cut down the HTTP requests to 1 for our CSS files. I'm finding a CSS file of this size a little unmanageable in light of the fact that I'm performing updates on the site constantly. My concern is that my users may catch me in the middle of updating and see a NON-styled page when moving from page to page, because 67kb still takes a good 2-3 seconds before it is successfully placed on the remote server via FTP. My question is: does the use of @import within this large CSS file, to break up the files into smaller, more manageable sizes (within that CSS file), take us back to the original 8 HTTP requests when the page is loaded? Or are @imports in CSS handled differently somehow?
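
    For illustration, a minimal sketch of the kind of consolidated stylesheet being asked about (the file names are hypothetical). Each @import below is downloaded by the browser as its own HTTP request, much like a separate link tag:

        /* main.css - the single file linked from the header */
        @import url("reset.css");         /* separate HTTP request */
        @import url("layout.css");        /* separate HTTP request */
        @import url("product-pages.css"); /* separate HTTP request */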

    Read the article

  • Preserving NULL values in a Double Variable

    - by Sam
    Hi, I'm working on a VB.NET application which imports from an Excel spreadsheet.

        If rdr.HasRows Then
            Do While rdr.Read()
                If rdr.GetValue(0).Equals(System.DBNull.Value) Then
                    Return Nothing
                Else
                    Return rdr.GetValue(0)
                End If
            Loop
        Else

    I was using a String variable to store the double values, and when preparing the database statement I'd use this code:

        If (LastDayAverage = Nothing) Then
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", System.DBNull.Value)
        Else
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", Convert.ToDecimal(LastDayAverage))
        End If

    I now have some data with quite a few decimal places, and the data was put into the string variable in scientific notation, so this seems to be the wrong approach. It didn't seem right using the string variable to begin with. If I use a Double or Decimal variable, the blank Excel values come across as 0.0. How can I preserve the blank values? Thanks
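
    If it helps, a minimal sketch (variable and parameter names borrowed from the snippet above) of using a Nullable(Of Double) instead of a String, so a blank Excel cell stays distinct from 0.0:

        Dim lastDayAverage As Nullable(Of Double) = Nothing

        If rdr.HasRows AndAlso rdr.Read() Then
            ' Leave the variable as Nothing when the cell is blank (DBNull)
            If Not rdr.GetValue(0).Equals(System.DBNull.Value) Then
                lastDayAverage = Convert.ToDouble(rdr.GetValue(0))
            End If
        End If

        ' Map "no value" back to DBNull when building the command
        If lastDayAverage.HasValue Then
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", Convert.ToDecimal(lastDayAverage.Value))
        Else
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", System.DBNull.Value)
        End If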

    Read the article

  • How to create .so files from code written in C or C++ that are usable from Python

    - by None
    Looking at Python modules and at code in the "lib-dynload" directory in the Python framework, I noticed that whenever code is creating some kind of GUI or graphic it imports a non-Python file with a .so extension. And there are tons of .so files in "lib-dynload". From googling I found that these files are called shared objects and are written in C or C++. I have a Mac and I use gcc. I want to know how to make shared object files that are accessible via Python. Mainly just how to make shared objects with gcc on Mac OS X.
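
    As a rough sketch of the usual workflow (file, function, and library names here are made up for illustration): compile the C code into a shared object with gcc, then load it from Python, for example with ctypes. The .so files in lib-dynload are proper CPython extension modules, which are normally built with distutils rather than loaded through ctypes, but ctypes is the simplest way to call plain C from Python.

        # hello.c is assumed to contain:
        #     int add(int a, int b) { return a + b; }
        #
        # Build the shared object with gcc (on Mac OS X, -dynamiclib is the
        # traditional flag, but -shared is generally accepted as well):
        #     gcc -fPIC -shared -o libhello.so hello.c

        import ctypes

        lib = ctypes.CDLL("./libhello.so")              # load the compiled shared object
        lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
        lib.add.restype = ctypes.c_int

        print lib.add(2, 3)                             # prints 5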

    Read the article

  • (Python) Extracting Text from Source Code?

    - by zhuyxn
    I currently have a large webpage whose source code is ~200,000 lines of almost all (if not all) HTML. More specifically, it is a webpage whose content is a few thousand blocks of paragraphs separated by line breaks (though a line break does not necessarily mean there is a separation in content). My main objective is to extract text from the source code as if I were copying/pasting the webpage into a text editor. There is another parsing function I would like to use, which originally took in copied/pasted text rather than the source code. To do this, I'm currently using urllib2 and calling .get_text() in Beautiful Soup. The problem is, Beautiful Soup is leaving tremendous amounts of whitespace in my output, and it is difficult to pass the result into the second "text" parser. I have done quite a bit of research on parsing HTML, but I'm frankly not sure how to solve this problem easily. Furthermore, I'm a bit confused about how to use imports like lxml to extract text as if I were to simply copy and paste.
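
    A minimal sketch of one way to tame the whitespace, assuming Beautiful Soup 4 (the URL is a placeholder): get_text() accepts separator and strip arguments, and a regex pass collapses whatever blank lines remain.

        import re
        import urllib2
        from bs4 import BeautifulSoup

        html = urllib2.urlopen("http://example.com/big-page.html").read()
        soup = BeautifulSoup(html, "lxml")   # lxml is used here only as the parser backend

        # strip=True trims whitespace around each text fragment,
        # separator="\n" joins fragments with a single newline
        text = soup.get_text(separator="\n", strip=True)

        # collapse any remaining runs of blank lines
        text = re.sub(r"\n{2,}", "\n", text)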

    Read the article

  • How to Specify AssemblyKeyFile Attribute in .NET Assembly and Issues

    How to specify a strong key file in an assembly? Answer: you can specify the .snk file information using the following line:

        [assembly: AssemblyKeyFile(@"c:\Key2.snk")]

    Where to specify a strong key file (.snk file)? Answer: you have two options for specifying the AssemblyKeyFile information:

    1. In a class
    2. In AssemblyInfo.cs

    1. In a class, you must specify the above line before the namespace declaration of the class and after all the imports or usings. Example: see the AssemblyKeyFile line in the sample class below.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Reflection;

        [assembly: AssemblyKeyFile(@"c:\Key1.snk")]

        namespace Csharp3Part1
        {
            class Person
            {
                public string GetName()
                {
                    return "Smith";
                }
            }
        }

    2. In AssemblyInfo.cs, you can also specify the assembly information. Example: see the AssemblyKeyFile line in the sample AssemblyInfo.cs below.

        using System.Reflection;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        // General Information about an assembly is controlled through the following
        // set of attributes. Change these attribute values to modify the information
        // associated with an assembly.
        [assembly: AssemblyTitle("Csharp3Part1")]
        [assembly: AssemblyDescription("")]
        [assembly: AssemblyConfiguration("")]
        [assembly: AssemblyCompany("Deloitte")]
        [assembly: AssemblyProduct("Csharp3Part1")]
        [assembly: AssemblyCopyright("Copyright © Deloitte 2009")]
        [assembly: AssemblyTrademark("")]
        [assembly: AssemblyCulture("")]
        [assembly: AssemblyKeyFile(@"c:\Key1.snk")]

        // Setting ComVisible to false makes the types in this assembly not visible
        // to COM components. If you need to access a type in this assembly from
        // COM, set the ComVisible attribute to true on that type.
        [assembly: ComVisible(false)]

        // The following GUID is for the ID of the typelib if this project is exposed to COM
        [assembly: Guid("4350396f-1a5c-4598-a79f-2e1f219654f3")]

        // Version information for an assembly consists of the following four values:
        //
        //      Major Version
        //      Minor Version
        //      Build Number
        //      Revision
        //
        // You can specify all the values or you can default the Build and Revision Numbers
        // by using the '*' as shown below:
        // [assembly: AssemblyVersion("1.0.*")]
        [assembly: AssemblyVersion("1.0.0.0")]
        [assembly: AssemblyFileVersion("1.0.0.0")]

    Issues: you should not specify this in the following ways: 1. in multiple classes, or 2. in both a class and AssemblyInfo.cs. If you do either of the above, Visual Studio or the C#/VB.NET compiler shows the error "Duplicate 'AssemblyKeyFile' attribute" and the warning "Use command line option '/keyfile' or appropriate project settings instead of 'AssemblyKeyFile'". To avoid this, specify your key file information only once, either in one class or in the AssemblyInfo.cs file. It is suggested to specify it in the AssemblyInfo.cs file. You might also encounter errors like "The type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found". Solution: please find it here.

    Read the article

  • Software Architecture and MEF composition location

    - by Leonardo
    Introduction

    My software (a bunch of web APIs) consists of 4 projects: Core, FrontWebApi, Library and Administration.

    Library is a code-library project that consists of only interfaces and enumerators. All my classes in the other projects inherit from at least one interface, and this interface is in the Library. Generally speaking, my interfaces define either Entities, Repositories or Controllers. This project references no other project or any special dlls... just the regular .NET stuff...

    Core is a class-library project with the concrete implementations of Entities and Repositories. In some cases I have more than 1 implementation for a Repository (e.g. one for Azure table storage and one for regular SQL). This project handles the intelligence (business rules mostly) and persistence, and it references only the Library.

    FrontWebApi is an ASP.NET MVC 4 Web API project that implements the controller interfaces to handle web requests (from a mobile native app)... It references the Core and the Library.

    Administration is a code-library project that represents an "optional module", meaning: if it is present, it provides extra features (such as Access Control Lists) to the application, but if it's not, no problem. Administration also references only the Library and implements concrete classes of a few interfaces such as "IAccessControlEntry"... I intend to make this available with a "setup" that will create any required database table or anything like that. But it is important to notice that the Core has no reference to this project...

    Development

    Now, in order to have decoupled code I decided to use IoC, and because this is a small project I decided to do it using MEF, especially because of its advertised "composition" capabilities. I arranged all the imports/exports and constructors and everything, but something is quite not perfect in my "mental visualisation":

    Main Question

    Where should I "compose" the objects? I mean: technically, the only place where access to a real implementation is required is in the Repositories, because in order to retrieve data from wherever, entity instances will be necessary (there and in all other places). The repositories could also provide a public "GetCleanInstanceOf()", right? Then all other places would be just fine working with the interfaces instead of concrete classes...

    Secondary Question

    Should "Administration" implement the concrete object for "IAccessControlGeneralRepository", or should the Core?
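
    For reference, a minimal sketch of the kind of MEF wiring being described (the class names AccessControlGeneralRepository and CompositionRoot are invented for illustration; only IAccessControlGeneralRepository comes from the question). The composition root would typically live in the host, e.g. the Web API project's startup code:

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        public interface IAccessControlGeneralRepository { }   // defined in Library

        // Concrete implementation lives in Core (or Administration) and is exported.
        [Export(typeof(IAccessControlGeneralRepository))]
        public class AccessControlGeneralRepository : IAccessControlGeneralRepository { }

        public class CompositionRoot
        {
            [Import]
            public IAccessControlGeneralRepository AccessControl { get; set; }

            public static CompositionRoot Compose()
            {
                // Catalogs pick up exports from the loaded assemblies, including the
                // optional Administration module if its dll is present in the bin folder.
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(Assembly.GetExecutingAssembly()),
                    new DirectoryCatalog("."));

                var container = new CompositionContainer(catalog);
                var root = new CompositionRoot();
                container.ComposeParts(root);   // satisfies the [Import]s
                return root;
            }
        }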

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok so I need to create a bunch of utility classes in Python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for Python properties. One pattern I see used a lot is creating an internal Python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example:

        # util.py
        class MetaFooBase(type):
            @property
            def file_path(cls):
                raise NotImplementedError

            def inherited_method(cls):
                print cls.file_path

        # foo.py
        from util import *
        import env

        class MetaFoo(MetaFooBase):
            @property
            def file_path(cls):
                return env.base_path + "relative/path"

            def another_class_method(cls):
                pass

        class Foo(object):
            __metaclass__ = MetaFoo

        # client.py
        from foo import Foo
        file_path = Foo.file_path

    I like this approach better than the first pattern for a few reasons. First, instantiating Foo would be meaningless, as it has no attributes or methods, which ensures this class acts like a true single-interface utility, unlike the first pattern, which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, sub-classing MetaFoo in a different module wouldn't be as awkward, because I wouldn't be importing a class with an underscore, which is inherently going against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in Python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent when using it, by the naming convention Foo, as opposed to foo, which denotes that a static class method is being used. As much as I think this is a good fit, I feel that others might feel it's not Pythonic, because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most Python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered code smell in the Python community? I ask because I'm creating a PyPI package, and would like to do everything I can to increase adoption.

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue

    In the previous chapter we saw how to create DB records with EF; now we are going to look at some questions about Oracle.

    ORACLE

    One characteristic of SQL Server that differs from Oracle is "Identity". For anyone who has not worked with SQL Server: this property, which applies to integer columns, marks a column as auto-incrementing, so it is filled automatically without being written in the insert statement. In EF with SQL Server, the properties that map to Identity columns are filled after invoking the SaveChanges method. In Oracle there is no Identity property, but there is something similar.

    Sequences

    Sequences are DB objects that provide auto-increment values, but they are not related directly to a table. The syntax is as follows: name, min value, max value and begin value.

        CREATE SEQUENCE nombre_secuencia
        INCREMENT BY numero_incremento
        START WITH numero_por_el_que_empezara
        MAXVALUE valor_maximo | NOMAXVALUE
        MINVALUE valor_minimo | NOMINVALUE
        CYCLE | NOCYCLE
        ORDER | NOORDER

    How do you get the sequence value? To obtain the next value from the sequence:

        SELECT nb_secuencia.Nextval
        FROM Dual

    Since there is no direct way to indicate that a column is related to a sequence, there are several ways to imitate the behavior: use a trigger (DB), use stored procedures or functions (…), or my particular option. The EF model only imports table objects, stored procedures or functions, but not sequences. Because of that, I decided to create my own extension method to invoke NextVal on a sequence:

        public static class EFSequence
        {
            public static int GetNextValue(this ObjectContext contexto, string SequenceName)
            {
                string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString;
                Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19);
                Connection = Connection.Remove(Connection.Length - 1, 1);
                using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection))
                {
                    using (IDbCommand cmd = con.CreateCommand())
                    {
                        con.Open();
                        cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName);
                        return Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }

    This ObjectContext extension method invokes a query for the sequence indicated by the parameter. It takes the connection string from the app settings, removing the metadata that was created by VS when the EF model was generated, and then returns the next value from the sequence. The next value of a sequence is unique, so when concurrent users create records in the DB using the sequence they will not get duplicates. This is my own implementation; I know there could be several other, better, ways to do it. If I find another way, I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) dll.
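
    A hedged sketch of how the extension method might be called when inserting, with the entity, property and sequence names invented for illustration:

        // "MY_SEQ" and the Customer entity are hypothetical placeholders
        int nextId = context.GetNextValue("MY_SEQ");

        var customer = new Customer { Id = nextId, Name = "Smith" };
        context.AddObject("Customers", customer);   // plain ObjectContext API
        context.SaveChanges();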

    Read the article

  • IIS7 - Virtual Directories' Parent Paths behaving differently than previous versions

    - by MisterZimbu
    I'm doing a migration of a web server running on IIS 5 to IIS 7. I'm noticing that the virtual directories are behaving differently between the two. I have a site located at c:\inetpub\SiteName. This site contains a virtual directory "bob" that points at c:\virtualdirs\bob. There's a script in the bob folder (script.asp) that contains just: <!--#include virtual="../index.asp"--> I'm noticing different behaviors between IIS5 and IIS7 when I attempt to run the script by going to http://SiteName/bob/script.asp: IIS5 references the parent path of the site, and imports c:\inetpub\SiteName\index.asp. IIS7 references the parent folder of the virtual directory, and looks for a c:\virtualdirs\index.asp (that doesn't exist). Doing a Response.Write of a Server.MapPath confirms this. Is there a way to get IIS7 to behave like IIS5 in this regard? Unfortunately, moving index.asp and its logic into the virtualdirs folder isn't an option as the virtual directory will be shared across many sites (with differing index.asps). Thanks.

    Read the article

  • Sharepoint Workflow "Failed on Start" only when powershell import script is called from task scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library (a content-management-enabled library, if that makes any difference). This script works flawlessly if I run it from the PowerShell command line manually. I call it like such: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy and dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action (Start a program). The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script (C:\scripts\sharepoint_import.ps1). I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.

    Read the article

  • How do I remove a USB drive's write protection?

    - by nate
    I have a SanDisk Cruzer Blade USB stick that suddenly seems to be write protected. I tried running DiskPart, but after I enter the command "attributes disk clear readonly" it displays this:

        Microsoft DiskPart version 5.1.3565

        ADD         - Add a mirror to a simple volume.
        ACTIVE      - Marks the current basic partition as an active boot partition.
        ASSIGN      - Assign a drive letter or mount point to the selected volume.
        BREAK       - Break a mirror set.
        CLEAN       - Clear the configuration information, or all information, off the disk.
        CONVERT     - Converts between different disk formats.
        CREATE      - Create a volume or partition.
        DELETE      - Delete an object.
        DETAIL      - Provide details about an object.
        EXIT        - Exit DiskPart
        EXTEND      - Extend a volume.
        HELP        - Prints a list of commands.
        IMPORT      - Imports a disk group.
        LIST        - Prints out a list of objects.
        INACTIVE    - Marks the current basic partition as an inactive partition.
        ONLINE      - Online a disk that is currently marked as offline.
        REM         - Does nothing. Used to comment scripts.
        REMOVE      - Remove a drive letter or mount point assignment.
        REPAIR      - Repair a RAID-5 volume.
        RESCAN      - Rescan the computer looking for disks and volumes.
        RETAIN      - Place a retainer partition under a simple volume.
        SELECT      - Move the focus to an object.

    It's like when you type help at the DiskPart prompt, so how do I get past this? This problem started when I plugged the stick into a laptop which had viruses, if that's any help.

    Read the article

  • SharePoint: what does "System.Runtime.InteropServices.COMException (0x81071003)" mean?

    - by kpinhack
    Hello, I've got some code that imports documents into a SharePoint (WSS 3.0 SP1) document library. That code works most of the time without any problems, but sometimes the document is not imported into the document library and I get this nasty exception instead.

        Microsoft.SharePoint.SPException: Unable to update the information in the Microsoft Office document myFileName. ---> System.Runtime.InteropServices.COMException (0x81071003): Unable to update the information in the Microsoft Office document myFileName.
           at Microsoft.SharePoint.Library.SPRequestInternalClass.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDocId, String bstrVersion, Object& pvarAttachmentNames, Object& pvarAttachmentContents, Object& pvarProperties, Boolean bCheckOut, Boolean bCheckin, Boolean bMigration, Boolean bPublish)
           at Microsoft.SharePoint.Library.SPRequest.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDocId, String bstrVersion, Object& pvarAttachmentNames, Object& pvarAttachmentContents, Object& pvarProperties, Boolean bCheckOut, Boolean bCheckin, Boolean bMigration, Boolean bPublish)

    What does this exception mean? And why does it occur only sometimes? Thanks!

    Read the article

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the Hyper-V module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVMStorage in the EXP file. This file is an XML file. I wrote the following PowerShell code to do the update for me; the variable $existing is the existing exp file.

        $xml = [xml](get-content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) { $node.Value = 'TRUE' }
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error that says the file format is incorrect. I am wondering if the encoding of the file is incorrect or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears that the file name is too long for Notepad to open. Notepad throws an error trying to open the file, which is why I went with WordPad. I have noticed that the file that is updated with PowerShell comes out formatted, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might be able to use my code to automate this change in the XML without changing the file format?
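
    If the difference does turn out to be the added indentation or the encoding, here is a hedged sketch of one way to control both when saving (the BOM-less UTF-8 choice is an assumption, not something the Hyper-V import is documented to require):

        # Keep the output un-indented and write UTF-8 without a byte-order mark
        $settings = New-Object System.Xml.XmlWriterSettings
        $settings.Indent = $false
        $settings.Encoding = New-Object System.Text.UTF8Encoding($false)

        $writer = [System.Xml.XmlWriter]::Create($existing, $settings)
        try     { $xml.Save($writer) }
        finally { $writer.Close() }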

    Read the article

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command-line arguments I have tried to use with both:

        winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>

    and

        CertMgr.exe -add -all -s -r localMachine -c <cert name> My

    It seems from what I have investigated that CertMgr does not allow you to import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them I get the following output:

        WinHttpCertCfg:
        Microsoft (R) WinHTTP Certificate Configuration Tool
        Copyright (C) Microsoft Corporation 2001.

        CertMgr:
        CertMgr Succeeded

    However, when I look into the local machine certificates in MMC, try to load the certificate from my service, list it out through winhttpcertcfg, or even look at the registry in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have tried all of the following:

    - If I install the cert manually (through the CertMgr.msc dialogs) it works.
    - The user installing is running as administrator.
    - The user installing has full access on the certificate.
    - The tools print out an error when something is wrong (e.g. a wrong password).
    - Tried it on multiple machines (all of them Server 2008 R2).

    At this point I am officially out of ideas. Thank you.

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement that live data be viewable to customers. The project's main points are:

    1. Devices in the field send data to the server (devices are also like website users) after login.
    2. There is a background import process which imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 cloud machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices among these two servers. But again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?

    Read the article

  • How do I `SUM` by multiple columns in Excel

    - by dwwilson66
    I have a comma-delimited file that includes two columns: date/time (which imports as Excel's mm/dd/yyyy hh:mm custom format) and a status of 1 or 0. The status represents a piece of equipment either being on or off. I'm trying to generate a graph that will show hours up vs. down by day. CONSIDER:

        1/1/2012 00:00, 1
        1/1/2012 03:00, 0
        1/1/2012 14:00, 1
        1/3/2012 00:00, 0

    This tells me that the equipment was up for three hours, down for eleven hours, and then up for thirty-four hours (across two calendar days). However, I would like to generate a graph that shows how many hours PER DAY we were up or down. CONSIDER:

        1/1 XXXXXXXXXXXXX----------- (up 13, down 11)
        1/2 XXXXXXXXXXXXXXXXXXXXXXXX (up 24)

    To me, it seems that I need to generate a dataset summing HOURS by STATUS by CALENDAR DAY... but I can't seem to find a flavor of pivot table or nested SUM(IF(SUMIF(...))) combination to make it work. Most troubling is accounting for date changes... in my example above, since my uptime starting at 14:00 on 1/1/2012 crosses midnight, I need to know that 10 uptime hours get totalled with 1/1/2012 and 24 uptime hours get totalled with 1/2/2012. I may be able to do something with a calendar list to drive the date summation, but then I need a way to compare 01/01/2012 to 01/01/2012 03:00 as equal. There's got to be a way along the lines of if(INTEGER-PORTIONS-OF-SERIAL-DATES-ARE-EQUAL, TOTAL-HOURS-IF-VALUE-IS_1, 0), but nothing's worked so far. Any suggestions? I've been battling this most of the day and need a fresh perspective. Thanks

    Read the article

  • Allowing non-admin users to unstick the print spooler

    - by Reafidy
    I currently have an issue where the print queue is getting stuck on a central print server (Windows Server 2008). Using the "Clear all documents" function does not clear it and gets stuck too. I need non-admin users to be able to clear the print queue from their workstations. I have tried using the following WinForms program, which I created; it allows a user to stop the print spooler, delete the printer files in the "C:\Windows\System32\spool\PRINTERS" folder and then start the print spooler again, but this functionality requires the program to be run as an administrator. How can I allow my normal users to execute this program without giving them admin privileges? Or is there another way I can allow normal users to clear the print queue on the server?

        Imports System.ServiceProcess

        Public Class Form1

            Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1.Click
                ClearJammedPrinter()
            End Sub

            Public Sub ClearJammedPrinter()
                Dim tspTimeOut As TimeSpan = New TimeSpan(0, 0, 5)
                Dim controllerStatus As ServiceControllerStatus = ServiceController1.Status
                Try
                    If ServiceController1.Status <> ServiceProcess.ServiceControllerStatus.Stopped Then
                        ServiceController1.Stop()
                    End If
                    Try
                        ServiceController1.WaitForStatus(ServiceProcess.ServiceControllerStatus.Stopped, tspTimeOut)
                    Catch
                        Throw New Exception("The controller could not be stopped")
                    End Try
                    Dim strSpoolerFolder As String = "C:\Windows\System32\spool\PRINTERS"
                    Dim s As String
                    For Each s In System.IO.Directory.GetFiles(strSpoolerFolder)
                        System.IO.File.Delete(s)
                    Next s
                Catch ex As Exception
                    MsgBox(ex.Message)
                Finally
                    Try
                        Select Case controllerStatus
                            Case ServiceControllerStatus.Running
                                If ServiceController1.Status <> ServiceControllerStatus.Running Then ServiceController1.Start()
                            Case ServiceControllerStatus.Stopped
                                If ServiceController1.Status <> ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                        End Select
                        ServiceController1.WaitForStatus(controllerStatus, tspTimeOut)
                    Catch
                        MsgBox(String.Format("{0}{1}", "The print spooler service could not be returned to its original setting and is currently: ", ServiceController1.Status))
                    End Try
                End Try
            End Sub

        End Class

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that are something ridiculous like $25 per user. I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • Import data in Excel that doesn't have a row delimiter, but number of columns is known

    - by Alex B
    So I have this text file that looks something like this:

        Header1 Header2 Header3 Header4
        A1 B1 C1 D1
        A2 B2 C2 D2

    and so on. When imported, I'd want the data to format itself into 4 columns. I tried Get External Data from Text, and it successfully imports the file, but it doesn't wrap the data around, so it just keeps making columns for every space. I'd want it to go on to the next line after 4 (in this case) elements have been added. What's the simplest way to achieve this? EDIT: My answer follows, since I'm not allowed to answer my own questions yet. The Excel function I needed is called INDIRECT(). Not sure how it actually works though, so hopefully someone can help out with that, but the function call that worked for me is

        =INDIRECT(ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1))

    which I found over here: http://www.ozgrid.com/forum/showthread.php?t=101584&p=456031#post456031 Note: this required me to add the text to Excel where I'd get this row full of columns, and then flip it so that I'd have a column full of rows.

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default web browsing history search engine only lets you access the records of the recent 100 days. Nevertheless, in your application data, Chrome keeps your entire browsing history in SQLite database files, with the file naming scheme of "History Index YYYY-MM". I am looking for a way to search…

    …through my entire browsing history, with sophisticated filters (limit search terms to certain fields such as URL, domain, title, body text; wildcard or regex terms, date ranges), in…

    …either some ready-made software. eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes, and has the same limited time horizon as the default search. Beyond that, I could not find any suited Chrome extension or standalone (Mac) app.

    …or a command line to join multiple SQLite database files into one database, which I can then query (with the full syntax power). In the spirit of the pseudo code below. Preferred this way:

        sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles

    Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly importing only those files which I know have not yet been imported into the ChromeHistoryAll database:

        cd ChromeAppData; sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3

    All my queries I would then perform in the database "ChromeHistoryAll".

    P.S.: An additional question of general interest: is there a way to perform a database query in a temporary database which was created on-the-fly from multiple files? Like:

        sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3

    This is surely not applicable for my Chrome question, as these History Index files have a combined file size of 500MB, so such a query would perform badly. But it could come in handy in other situations.
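
    For what it's worth, a hedged sketch of how such a merge could be done with the real sqlite3 shell using ATTACH (the table name "pages" is a placeholder; the actual schema of Chrome's History Index files would need to be inspected first, and duplicate detection is not handled here):

        -- run inside: sqlite3 ChromeHistoryAll.db
        ATTACH DATABASE 'History Index 2012-03' AS src;

        -- copy rows from the monthly file into the combined database
        -- ("pages" is a hypothetical table name; check the real one with .schema)
        INSERT INTO pages SELECT * FROM src.pages;

        DETACH DATABASE src;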

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer, for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quick it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping file c. 1GB+, UPDATEs on large MySQL tables etc. have become very slow. I just performed an intensive alter statement on a 240,000,000 row table on a remote server, which is lower spec. This took about 10 minutes. However, performing the same query on a 167,000,000 row table on my computer went fine until it hit 860MB. Now it is only writing about 1MB every 15 seconds. Does anyone have any advice as to debugging what the issue is? I am using LinuxMint (based on Ubuntu 12.04.) The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kenect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kenect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developers perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK, the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pin in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence JPEG files that will be used to animate the pins in the animation the Start and Stop buttons are used to start and stop the image recording. En example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster that an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers Cost 1 $500 16 $8,000 256 $128,000   As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be perfumed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles Render Time Cost 1 256 hours $30.72 16 16 hours $30.72 256 1 hour $30.72   Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: ·         On-Premise o   Windows Kinect – Used combined with the Kinect Explorer to create a stream of depth images. o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. ·         Windows Azure o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.   The architecture of each worker role is shown below.   The worker role is configured to use local storage, which provides file storage on the worker role instance that can be use by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. (A rough sketch of this billed-hours arithmetic is shown after the tips below.)

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
· Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
· If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
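As a rough illustration of the billed-hours point made under Effective Use of Resources, the sketch below estimates the billed compute hours and the smallest instance count that could, in theory, finish within a target wall-clock window. The figures come from the article; the helper class itself and its assumption that every instance is billed for each full clock hour it is deployed in are illustrative, not part of the CloudRay code.

// A rough sketch of the billed-hours arithmetic, not part of the CloudRay code.
using System;

class RenderFarmEstimate
{
    static void Main()
    {
        // Total CPU time reported by the monitor: 6 days, 7 hours, 22 minutes.
        TimeSpan totalCpuTime = new TimeSpan(6, 7, 22, 0);
        double targetWallClockHours = 1.0;   // finish within one billed hour

        // Compute hours rounded up to the nearest hour (152 in the article).
        int computeHours = (int)Math.Ceiling(totalCpuTime.TotalHours);

        // Smallest number of instances that could finish the work inside the
        // target wall-clock window, ignoring provisioning and start-up time.
        int instancesNeeded = (int)Math.Ceiling(computeHours / targetWallClockHours);

        Console.WriteLine("Compute hours (billed):              {0}", computeHours);
        Console.WriteLine("Instances needed for {0}h wall clock: {1}",
            targetWallClockHours, instancesNeeded);
        // With 256 instances each billed for one hour, 256 instance hours are
        // paid for ~152 hours of work; allowing for provisioning time, roughly
        // 200 instances would have been a tighter fit, as the article notes.
    }
}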

    Read the article

  • Can plugins loaded with MEF resolve their own internal dependencies with the same MEF container for

    - by Dave
    From my experimentation, I think the answer is "kind of", but I could have made a mistake. I have an application that loads appliance plugins with MEF. That part is working fine. Now let's say that my BlenderAppliance wants to resolve several of its dependencies with MEF, each of which implements IApplianceFeature. I've just added the ImportMany attribute to my plugin. I made sure to create the plugin using MEF so that the Imports work properly. I said "kind of" because some of the plugin's internals (i.e. the model) are loading with MEF just fine, but the IApplianceFeatures aren't. The difference here is that the IApplianceFeatures are themselves assemblies, and at the moment they sit one folder above the plugin itself, i.e.

        + application folder
        |   IApplianceFeature1.dll
        |   IApplianceFeature2.dll
        +---+ plugin folder
            |   BlenderAppliance.dll

    Now if my application uses an AggregateCatalog to load the "." and ".\plugins" folders, why doesn't it ever load the IApplianceFeature assemblies for me? Is it possible / advisable to have the plugin create its own MEF container to resolve its dependencies, or does really nasty stuff happen? If you have any stories about this scenario, please share. :)
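    The sketch below is not from the original post; it shows one host-side arrangement in which an AggregateCatalog spanning both folders satisfies the plugin's ImportMany from the parent directory, assuming the feature assemblies export IApplianceFeature. The names, paths and the direct construction of BlenderAppliance are illustrative only.

    // A minimal sketch of a host-side catalog spanning both folders.
    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IApplianceFeature { }

    public class BlenderAppliance
    {
        // Satisfied only if the composing container's catalog can see the
        // IApplianceFeature assemblies in the application folder.
        [ImportMany]
        public IEnumerable<IApplianceFeature> Features { get; set; }
    }

    public static class PluginHost
    {
        public static BlenderAppliance LoadAppliance(string appFolder, string pluginFolder)
        {
            var catalog = new AggregateCatalog(
                new DirectoryCatalog(appFolder),      // IApplianceFeature*.dll
                new DirectoryCatalog(pluginFolder));  // BlenderAppliance.dll

            var container = new CompositionContainer(catalog);
            var appliance = new BlenderAppliance();
            container.ComposeParts(appliance);        // fills Features from the parent folder
            return appliance;
        }
    }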

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
    I have a problem with a JDBC application that uses the MONEY data type. When I insert into a MONEY column:

        insert into _money_test (amt) values ('123.45')

    I get the exception:

        Character to numeric conversion error

    The same SQL works from a native Windows application using the ODBC driver. I live in Poland and have a Polish locale; in my country a comma separates the decimal part of a number, so I tried:

        insert into _money_test (amt) values ('123,45')

    And it worked. I checked that in PreparedStatement I must use the dot separator: 123.45. And of course I can use:

        insert into _money_test (amt) values (123.45)

    But some code is "general": it imports data from a CSV file, and it was safe to put the number into a string literal. How can I force JDBC to use DBMONEY (or simply a dot) in literals? My workstation is WinXP. I have the ODBC and JDBC Informix clients in version 3.50 TC5/JC5. I have set DBMONEY to just a dot: DBMONEY=.

    EDIT: Test code in Jython:

        import sys
        import traceback

        from java.sql import DriverManager
        from java.lang import Class

        Class.forName("com.informix.jdbc.IfxDriver")

        QUERY = "insert into _money_test (amt) values ('123.45')"

        def test_money(driver, db_url, usr, passwd):
            try:
                print("\n\n%s\n--------------" % (driver))
                db = DriverManager.getConnection(db_url, usr, passwd)
                c = db.createStatement()
                c.execute("delete from _money_test")
                c.execute(QUERY)
                rs = c.executeQuery("select amt from _money_test")
                while (rs.next()):
                    print('[%s]' % (rs.getString(1)))
                rs.close()
                c.close()
                db.close()
            except:
                print("there were errors!")
                s = traceback.format_exc()
                sys.stderr.write("%s\n" % (s))

        print(QUERY)
        test_money("com.informix.jdbc.IfxDriver", 'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250', 'informix', 'passwd')
        test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd')

    Results when I run the money literal with a comma and then with a dot:

        C:\db_examples>jython ifx_jdbc_money.py
        insert into _money_test (amt) values ('123,45')

        com.informix.jdbc.IfxDriver
        --------------
        [123.45]

        sun.jdbc.odbc.JdbcOdbcDriver
        --------------
        there were errors!
        Traceback (most recent call last):
          File "ifx_jdbc_money.py", line 16, in test_money
            c.execute(QUERY)
        SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error

        C:\db_examples>jython ifx_jdbc_money.py
        insert into _money_test (amt) values ('123.45')

        com.informix.jdbc.IfxDriver
        --------------
        there were errors!
        Traceback (most recent call last):
          File "ifx_jdbc_money.py", line 16, in test_money
            c.execute(QUERY)
        SQLException: java.sql.SQLException: Character to numeric conversion error

        sun.jdbc.odbc.JdbcOdbcDriver
        --------------
        [123.45]

    Read the article

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's. The current application that imports this data is a C#.Net 3.5 console application; it does so using SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data, or rather the fields I can come up with that make sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique; if one field is different, it is valid enough to be inserted. I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the BulkCopy operation, and I'm not sure if the whole operation would fail and roll back if any one record failed, or if it would continue. The file contains a very large number of records; going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert is really the only way I can see of being able to do this. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster. Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?
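    Not part of the original post, but one possible approach as a sketch: hash every column of each row client-side and drop within-file duplicates before handing the DataTable to SqlBulkCopy. The table name, connection string and separator character are placeholders; duplicates already present in the database would still need a staging table plus an INSERT ... WHERE NOT EXISTS, or a persisted hash column with a unique constraint.

    // A minimal sketch: de-duplicate rows client-side by hashing every column
    // value, then bulk copy only the distinct rows. Names are placeholders.
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Security.Cryptography;
    using System.Text;

    public static class BulkLoader
    {
        static string HashRow(DataRow row)
        {
            var sb = new StringBuilder();
            foreach (object value in row.ItemArray)
                sb.Append(value == null ? "" : value.ToString()).Append('\u001F'); // unit separator between fields
            using (MD5 md5 = MD5.Create())
                return Convert.ToBase64String(md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
        }

        public static void LoadDistinct(DataTable source, string connectionString)
        {
            var seen = new HashSet<string>();
            DataTable distinct = source.Clone();            // same schema, no rows
            foreach (DataRow row in source.Rows)
                if (seen.Add(HashRow(row)))                 // false when this hash was already seen
                    distinct.ImportRow(row);

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.Transactions"; // placeholder
                    bulkCopy.WriteToServer(distinct);
                }
            }
        }
    }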

    Read the article
