Search Results

Search found 1078 results on 44 pages for 'bill dami'.


  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to "get me most of my money back." He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me, I am still only going to get "most" of my money back. To top this experience off, when we were ready to hang up, "Les" told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to "Les" that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn't be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.
    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow window.
    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the source type, as we will be outputting columns from our stored procedure.
    Step 4: Double click the Script Component to open the editor.
    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String.
    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.
    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help.

    //Configure the connection string to your credentials
    String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

    using (SalesforceConnection conn = new SalesforceConnection(connectionString))
    {
        //Create the command to call the stored procedure CreateJob
        SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
        cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

        //Execute CreateJob
        //CreateBatch requires JobID as input so we store this value for later
        SalesforceDataReader rdr = cmd.ExecuteReader();
        String JobID = "";
        while (rdr.Read())
        {
            JobID = (String)rdr["JobID"];
        }

        //Create the command for CreateBatch, for this example we are adding two new rows
        SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
        batCmd.CommandType = CommandType.StoredProcedure;
        batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
        batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" +
            "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

        //Execute CreateBatch
        SalesforceDataReader batRdr = batCmd.ExecuteReader();
    }

    Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader loop that contains the output of CreateJob like so:

    while (rdr.Read())
    {
        Output0Buffer.AddRow();
        JobID = (String)rdr["JobID"];
        Output0Buffer.JobID = JobID;
    }

    Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class:

    using System.Data;
    using System.Data.RSSBus.Salesforce;

    Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane.
    Step 10: If you had any outputs from the Script Component you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.
    Step 11: Your project should be ready to run.

    Read the article

  • To refund or not to refund this client?

    - by Mahalia Samuels
    I'd really appreciate your advice on an ongoing project. I presented my client with a proposal and design samples which he approved, and he paid in full instead of the 50% upfront deposit as I'd given him a generous discount. He was then slow in furnishing me with some of the content, but once we did, he expected the website to be finished immediately which was not possible. Because he needed it done urgently, we agreed to try to get it done about 10 working days after the content was provided, but the developer who was helping me let me down. The next week, I completed the website myself and uploaded it to the server on a Friday afternoon. He then calls and texts me on following Sunday while I'm at church to say it's not online (there was probably a problem with his browser). The next morning, I received an email from him demanding a full refund within two days because he couldn't see the website (even though it was live, and I tested it on multiple browsers, a different computer and my phone), and he called me shouting at me because he couldn't access it. Finally when he was able to access it, he was unhappy with a certain detail regarding the slideshow which I began fixing and which was done the next day. He then referred me to another website and said he wanted it to look similar but not identical to it in terms of the layout. He also now wanted to add more features which were not in the original design. I got a designer to work on a new design which I sent to him for review, which if approved would be completed by 15 October, and he approved it last Thursday. He then called me yesterday to say that he wanted to change the design - he only approved it out of impatience. He now wants the website to be more similar to the other website he referred me to and he wants it done before the 15th! Then, he says to me that other people have done websites for him in three days - website's he's complained to me about for lacking dimension because they were just premium themes, whereas we'd designed and coded from scratch. I'm thinking of finishing the website but refunding him in full (or at least the refundable 50%) less domain registration and other non-refundable amounts, just to avoid further escalation of this matter and having him call me next week and say he wants to change it again. These are the applicable terms and conditions as laid out in the agreement: Total amount due for this project is Amount A. Client shall pay Consultant a deposit of Amount B (50% of total amount due for project) in advance before any work commences on the Project. The balance is due within 7 working days of completion of project. Deposit is non-refundable. Should client opt to host elsewhere, applicable transferral fee of Amount C will apply. Estimated project completion time frame is 14 to 30 days from the date Client furnishes Consultant with Brief and all other required media and data, provided that Client has made payment to secure the project. Consultant will make every effort to meet agreed upon due dates. The Client should be aware that failure to submit required information or materials, or last minute changes and excessive changes may cause subsequent delays. Client delays could result in significant delays in delivery of finished work. Major changes in client input or direction or brief will be charged at normal rates. Any work the Client wishes Consultant to create, which is not specified in the attached Proposal will be considered an additional service. 
Client agrees to pay Consultant for any additional expenses or additional services not included in the attached quotation and proposal if requested by the Client. Web design credit in the name of the Consultant, and link to Consultant’s website shall be placed on the footer of the final Website. Either party may terminate this Agreement by giving 7 days written notice to the other of such termination. In the event that Work is postponed or terminated at the request of the Client, Consultant shall have the right to bill pro rata at full rates for work completed through the date of that request, while reserving all rights under this Agreement. If additional payment is due, this shall be payable within seven days of the Client's written notification to stop work. In the event of termination, the Client shall also pay any expenses incurred by Consultant and the Consultant shall own all rights to the Work. Advice please?

    Read the article

  • Should I deal with files longer than MAX_PATH?

    - by John
    Just had an interesting case. My software reported back a failure caused by a path being longer than MAX_PATH. The path was just a plain old document in My Documents, e.g.: C:\Documents and Settings\Bill\Some Stupid FOlder Name\A really ridiculously long file thats really very very very..........very long.pdf Total length 269 characters (MAX_PATH==260). The user wasn't using an external hard drive or anything like that. This was a file on a Windows-managed drive. So my question is this: should I care? I'm not asking whether I can deal with the long paths, I'm asking whether I should. Yes, I'm aware of the "\\?\" Unicode hack on some Win32 APIs, but it seems this hack is not without risk (as it's changing the behaviour of the way the APIs parse paths) and also isn't supported by all APIs. So anyway, let me just state my position/assertions: First, presumably the only way the user was able to break this limit is if the app she used uses the special Unicode hack. It's a PDF file, so maybe the PDF tool she used uses this hack. I tried to reproduce this (by using the Unicode hack) and experimented. What I found was that although the file appears in Explorer, I can do nothing with it. I can't open it, and I can't choose "Properties" (Windows 7). Other common apps can't open the file (e.g. IE, Firefox, Notepad). Explorer will also not let me create files/dirs which are too long - it just refuses. Ditto for the command-line tool cmd.exe. So basically, one could look at it this way: a rogue tool has allowed the user to create a file which is not accessible by a lot of Windows (e.g. Explorer). I could take the view that I shouldn't have to deal with this. (As an aside, this isn't a vote of approval for a short max path length: I think 260 chars is a joke, I'm just saying that if the Windows shell and some APIs can't handle paths longer than 260, then why should I?). So, is this a fair view? Should I say "Not my problem"? Thanks! John
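    For illustration only (not from the original question): a minimal C# sketch of the "\\?\" extended-length workaround the question refers to, using P/Invoke because the managed System.IO APIs of that era reject paths longer than MAX_PATH. The constants are standard Win32 values and error handling is kept to a minimum.

    // Illustrative sketch; not from the original question.
    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class LongPathReader
    {
        const uint GENERIC_READ    = 0x80000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint OPEN_EXISTING   = 3;

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFileW(
            string lpFileName, uint dwDesiredAccess, uint dwShareMode,
            IntPtr lpSecurityAttributes, uint dwCreationDisposition,
            uint dwFlagsAndAttributes, IntPtr hTemplateFile);

        // Opens a file whose fully qualified path may exceed MAX_PATH by prepending the
        // "\\?\" extended-length prefix, which tells Win32 to skip path normalization.
        public static FileStream OpenForReading(string fullyQualifiedPath)
        {
            string extended = @"\\?\" + fullyQualifiedPath;
            SafeFileHandle handle = CreateFileW(extended, GENERIC_READ, FILE_SHARE_READ,
                IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
            if (handle.IsInvalid)
                throw new IOException("CreateFileW failed, error " + Marshal.GetLastWin32Error());
            return new FileStream(handle, FileAccess.Read);
        }
    }

    Whether to use it at all is still the judgment call the question is about; this only shows what the workaround looks like.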

    Read the article

  • XSLT: Display unique rows of filtered XML recordset

    - by Chris G.
    I've got a recordset that I'm filtering on a particular field (i.e. Manager ="Frannklin"). Now I'd like to group the results of that filtered recordset based on another field (Client). I can't seem to get Muenchian grouping to work right. Any thoughts? TIA! CG My filter looks like this: <xsl:key name="k1" match="Row" use="@Manager"/> <xsl:param name="dvt_filterval">Frannklin</xsl:param> <xsl:variable name="Rows" select="/dsQueryResponse/Rows/Row" /> <xsl:variable name="FilteredRowsAttr" select="$Rows[normalize-space(@*[name()=$FieldNameNoAtSign])=$dvt_filterval ]" /> Templates <xsl:apply-templates select="$FilteredRowsAttr[generate-id() = generate-id(key('k1',@Manager))]" mode="g1000a"> </xsl:apply-templates> <xsl:template match="Row" mode="g1000a"> Client: <xsl:value-of select="@Client"/> </xsl:template> Results I'm getting Client: Beta Client: Beta Client: Beta Client: Gamma Client: Delta Results I want Client: Beta Client: Gamma Client: Delta Sample recordset <dsQueryResponse> <Rows> <Row Manager="Smith" Client="Alpha " Project_x0020_Name="Annapolis" PM_x0023_="00123" /> <Row Manager="Ford" Client="Alpha " Project_x0020_Name="Brown" PM_x0023_="00124" /> <Row Manager="Cronkite" Client="Beta " Project_x0020_Name="Gannon" PM_x0023_="00129" /> <Row Manager="Clinton, Bill" Client="Beta " Project_x0020_Name="Harvard" PM_x0023_="00130" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Irving" PM_x0023_="00131" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Jakarta" PM_x0023_="00132" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Vassar" PM_x0023_="00135" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Stamford" PM_x0023_="00141" /> <Row Manager="Cronkite" Client="Gamma " Project_x0020_Name="Tufts" PM_x0023_="00142" /> <Row Manager="Frannklin" Client="Gamma " Project_x0020_Name="UCLA" PM_x0023_="00143" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Villanova" PM_x0023_="00144" /> <Row Manager="Carter" Client="Delta " Project_x0020_Name="Drexel" PM_x0023_="00150" /> <Row Manager="Clinton" Client="Delta " Project_x0020_Name="Iona" PM_x0023_="00151" /> <Row Manager="Frannklin" Client="Delta " Project_x0020_Name="Temple" PM_x0023_="00152" /> <Row Manager="Ford" Client="Epsilon " Project_x0020_Name="UNC" PM_x0023_="00157" /> <Row Manager="Clinton" Client="Epsilon " Project_x0020_Name="Berkley" PM_x0023_="00158" /> </Rows> </dsQueryResponse>

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version of this just created a single table per entry schema, with each entry spanning a single or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed on several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity. I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criteria are:
    Very fast querying (using a simple domain-specific query language)
    Writes don't have to be fast
    Many concurrent users
    Schemas will change often
    Schemas might contain many thousand columns
    The data entries might be distributed and need synchronization
    Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question
    Using .Net/Mono
    I've been thinking of a couple of possible designs, but none of them seems like a good choice.
    Solution 1: Union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
    Solution 2: Key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings.
    Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better with a relational model, and being able to join the data is helpful.
    I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but I need a lot of stuff that isn't included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask such a question. Thanks in advance!
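    As a purely hypothetical C# sketch of "Solution 1" above (all names invented for illustration, not part of the original question): one row per value, with a type discriminator and one nullable column per primitive type, so values keep their native types instead of being round-tripped through strings.

    // Hypothetical sketch of the "union-like" row described in the question; names are invented.
    using System;

    public enum FieldType { Text, Integer, Decimal, Date, Boolean }

    public class EntryValue
    {
        public long EntryId { get; set; }        // which data entry this value belongs to
        public long FieldId { get; set; }        // which schema field it fills
        public FieldType Type { get; set; }      // discriminator: which typed column is populated

        public string TextValue { get; set; }
        public long? IntegerValue { get; set; }
        public decimal? DecimalValue { get; set; }
        public DateTime? DateValue { get; set; }
        public bool? BooleanValue { get; set; }

        // Returns the populated column without converting everything through strings.
        public object GetValue()
        {
            switch (Type)
            {
                case FieldType.Integer: return IntegerValue;
                case FieldType.Decimal: return DecimalValue;
                case FieldType.Date:    return DateValue;
                case FieldType.Boolean: return BooleanValue;
                default:                return TextValue;
            }
        }
    }

    A query can then filter on the typed column directly (for example on IntegerValue) rather than casting strings, which is where the space-versus-indexing trade-off discussed above comes in.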

    Read the article

  • Rails: How do I unserialize from database?

    - by Macint
    Hello, I am currently trying to save information for an invoice/bill. On the invoice I want to show what the total price is made up of. The procedures & items, their price and the qty. So in the end I hope to get it to look like this: Consult [date] [total_price] Procedure_name [price] [qty] Procedure_name [price] [qty] Consult [date] [total_price] Procedure_name [price] [qty] etc... All this information is available through the database but i want to save the information as a separate copy. That way if the user changes the price of some procedures the invoice information is still correct. I thought i'd do this by serializing and save the data to a column (consult_data) in the Invoice table. My Model: class Invoice < ActiveRecord::Base ...stuff... serialize :consult_data ... end This is what I get from the form (1 consult and 3 procedures): {"commit"=>"Save draft", "authenticity_token"=>"MZ1OiOCtj/BOu73eVVkolZBWoN8Fy1skHqKgih7Sbzw=", "id"=>"113", "consults"=>[{"consult_date"=>"2010-02-20", "consult_problem"=>"ABC", "procedures"=>[{"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"}, {"name"=>"AAAnd another one", "price"=>"45.0", "qty"=>"4"}, {"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"}], "consult_id"=>"1"}]} My save action: def add_to_invoice @invoice = @current_practice.invoices.find_by_id(params[:id]) @invoice.consult_data=params[:consults] if @invoice.save render :text => "I think it worked" else render :text => "I don't think it worked'" end end It does save to the database and if I look at the entry in the console I can see that it is all there: consult_data: "--- \n- !map:HashWithIndifferentAccess \n consult_da..." (---The question---) But I can't seam to get back my data. I tried defining a variable to the consult_data attribute and then doing "variable.consult_problem" or "variable[:consult_problem]" (also tried looping) but it only throws no-method-errors back at me. How do I unserialize the data from the database and turn it back into hash that i can use? Thank you very much for any help!

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of the business rules for both developers and everybody else (support staff / management) in a startup enviroment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolving organically after that. After running this project for 3+ years, we have so many of such rules that often the only way to be sure about what the application is supposed to do in a certain situation is to go find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from the scratch, but every new developer needs to go over pretty much entire codebase in order to understand how the application works. Even bigger problem is that non technical employees don't even have that option and therefore are forced to ask me pretty much every day how some certain case would be handled by the application. Quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable" which most commonly belongs to us since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when customer deactivates his campaign, but that can only be done once the campaign is not longer under mandatory 6 month contract, unless account manager approves early deactivation. I know, that's quite a lot of rules that need to be taken into account when answering a question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to the minimum, but we need to adapt to changing marketplace - i.e. less than a year ago we had no contracts whatsoever. One idea that I had so far was a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be to have giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • Comparing Nested object properties using C#

    - by Kumar
    I have a method which compares two objects and returns a list of all the property names which are different.

    public static IList<string> GetDifferingProperties(object source, object target)
    {
        var sourceType = source.GetType();
        var sourceProperties = sourceType.GetProperties();
        var targetType = target.GetType();
        var targetProperties = targetType.GetProperties();
        var properties = (from s in sourceProperties
                          from t in targetProperties
                          where s.Name == t.Name &&
                                s.PropertyType == t.PropertyType &&
                                s.GetValue(source, null) != t.GetValue(target, null)
                          select s.Name).ToList();
        return properties;
    }

    For example, if I have two classes as follows:

    public class Address
    {
        public string AddressLine1 { get; set; }
        public string AddressLine2 { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Zip { get; set; }
    }

    public class Employee
    {
        public string FirstName { get; set; }
        public string MiddleName { get; set; }
        public string LastName { get; set; }
        public Address EmployeeAddress { get; set; }
    }

    I am trying to compare the following two employee instances:

    var emp1Address = new Address();
    emp1Address.AddressLine1 = "Microsoft Corporation";
    emp1Address.AddressLine2 = "One Microsoft Way";
    emp1Address.City = "Redmond";
    emp1Address.State = "WA";
    emp1Address.Zip = "98052-6399";

    var emp1 = new Employee();
    emp1.FirstName = "Bill";
    emp1.LastName = "Gates";
    emp1.EmployeeAddress = emp1Address;

    var emp2Address = new Address();
    emp2Address.AddressLine1 = "Gates Foundation";
    emp2Address.AddressLine2 = "One Microsoft Way";
    emp2Address.City = "Redmond";
    emp2Address.State = "WA";
    emp2Address.Zip = "98052-6399";

    var emp2 = new Employee();
    emp2.FirstName = "Melinda";
    emp2.LastName = "Gates";
    emp2.EmployeeAddress = emp2Address;

    So when I pass these two employee objects to my GetDifferingProperties method, it currently returns FirstName and EmployeeAddress, but it does not tell me which exact property (which in this case is AddressLine1) in the EmployeeAddress has changed. How can I tweak this method to get something like EmployeeAddress.AddressLine1?
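    One possible direction, shown here only as a hedged sketch (not the original poster's code): make the method recursive, descend into class-typed properties, and prefix nested names with the parent property so a difference inside EmployeeAddress comes back as a dotted path.

    // Sketch, not the original poster's code; property names prefixed with a dotted path.
    using System.Collections.Generic;

    public static class ObjectComparer
    {
        // Returns dotted property paths (e.g. "EmployeeAddress.AddressLine1") that differ.
        public static IList<string> GetDifferingProperties(object source, object target, string prefix = "")
        {
            var differences = new List<string>();
            if (source == null || target == null)
                return differences;

            foreach (var sourceProperty in source.GetType().GetProperties())
            {
                var targetProperty = target.GetType().GetProperty(sourceProperty.Name);
                if (targetProperty == null || targetProperty.PropertyType != sourceProperty.PropertyType)
                    continue;

                var name = prefix + sourceProperty.Name;
                var sourceValue = sourceProperty.GetValue(source, null);
                var targetValue = targetProperty.GetValue(target, null);

                if (sourceProperty.PropertyType.IsClass && sourceProperty.PropertyType != typeof(string))
                {
                    if (sourceValue == null || targetValue == null)
                    {
                        if (!ReferenceEquals(sourceValue, targetValue))
                            differences.Add(name);
                    }
                    else
                    {
                        // Complex type: recurse so nested differences are reported with a dotted path.
                        differences.AddRange(GetDifferingProperties(sourceValue, targetValue, name + "."));
                    }
                }
                else if (!Equals(sourceValue, targetValue))
                {
                    differences.Add(name);
                }
            }
            return differences;
        }
    }

    On the sample data above, GetDifferingProperties(emp1, emp2) should then return FirstName and EmployeeAddress.AddressLine1.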

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); then a background thread will dump the information to the database. My specific questions regarding this are:
    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>?
    3. Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.
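    A minimal, hypothetical sketch of the buffering idea described above (class and member names are invented, not part of the original question): each log entry carries a severity plus a dictionary of column-name/value pairs, and the cache is swapped out under a short lock so a background thread can persist it.

    // Hypothetical sketch; names invented for illustration.
    using System;
    using System.Collections.Generic;

    public class LogEntry
    {
        public DateTime Timestamp { get; set; }
        public string Severity { get; set; }                 // e.g. "INFO", "WARNING", "ERROR"
        public Dictionary<string, string> Fields { get; set; }

        public LogEntry()
        {
            Timestamp = DateTime.UtcNow;
            Fields = new Dictionary<string, string>();
        }
    }

    public class LogCache
    {
        private readonly object _sync = new object();
        private List<LogEntry> _pending = new List<LogEntry>();

        public void Add(LogEntry entry)
        {
            lock (_sync) { _pending.Add(entry); }
        }

        // Called by the background writer: swap the buffer so the lock is held only briefly,
        // then the returned snapshot can be written to the database without blocking callers.
        public List<LogEntry> TakeSnapshot()
        {
            lock (_sync)
            {
                var snapshot = _pending;
                _pending = new List<LogEntry>();
                return snapshot;
            }
        }
    }

    Swapping out the list reference keeps the lock window tiny, which matches the copy-and-remove approach described above, and the dictionary-per-entry shape maps naturally onto either the ALTER TABLE idea or the key/value table with a foreign key.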

    Read the article

  • compressed archive with quick access to individual file

    - by eric.frederich
    I need to come up with a file format for a new application I am writing. This file will need to hold a bunch of other files which are mostly text but can be other formats as well. Naturally, a compressed tar file seems to fit the bill. The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file from a tar.gz file seems to take longer than it should. I am assuming that this is because it has to decompress the entire file even though I just want one. When I have just a regular uncompressed tar file I can get that data real quick. Let's say the file I need quickly is called data.dat. For example the command... tar -x data.dat -zf myfile.tar.gz ... is what takes a lot longer than I'd like. MP3 files have ID3 data and JPEG files have EXIF data that can be read in quickly without opening the entire file. I would like my data.dat file to be available in a similar way. I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz. I could then create a tar file of data.dat and myfile.tar.gz, and then hopefully that data would be able to be retrieved faster because it is at the head of the outer tar file and is uncompressed. Does this sound right?... putting a compressed tar inside of a tar file? Basically, my need is to have an archive type of file with quick access to one particular file. Tar does this just fine, but I'd also like to have that data compressed, and as soon as I do that, I no longer have quick access. Are there other archive formats that will give me the quick access I need? As a side note, this application will be written in Python. If the solution calls for a re-invention of the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc. though. Thanks, ~Eric

    Read the article

  • How to store multiple variables from a File Input of unknown size in Java?

    - by AlphaOmegaStrife
    I'm a total beginner with my first programming assignment in Java. For our programming assignment, we will be given a .txt file of students like so: 3 345 Lisa Miller 890238 Y 2 <-(Number of classes) Mathematics MTH345 4 A Physics PHY357 3 B Bill Wilton 798324 N 2 English ENG378 3 B Philosophy PHL534 3 A Dandy Goat 746333 Y 1 History HIS101 3 A" The teacher will give us a .txt file on the day of turning it in with a list of unknown students. My problem is: I have a specific class for turning the data from the file into variables to be used for a different class in printing it to the screen. However, I do not know of a good way to get the variables from the input file for the course numbers, since that number is not predetermined. The only way I can think of to iterate over that unknown amount is using a loop, but that would just overwrite my variables every time. Also, the teacher has requested that we not use any JCL classes (I don't really know what this means.) Sorry if I have done a poor job of explaining this, but I can't think of a better way to conceptualize it. Let me know if I can clarify. Edit: public static void analyzeData() { Scanner inputStream = null; try { inputStream = new Scanner(new FileInputStream("Programming Assignment 1 Data.txt")); } catch (FileNotFoundException e) { System.out.println("File Programming Assignment 1 Data.txt could not be found or opened."); System.exit(0); } int numberOfStudents = inputStream.nextInt(); int tuitionPerHour = inputStream.nextInt(); String firstName = inputStream.next(); String lastname = inputStream.next(); String isTuitionPaid = inputStream.next(); int numberOfCourses = inputStream.nextInt(); String courseName = inputStream.next(); String courseNumber = inputStream.next(); int creditHours = inputStream.nextInt(); String grade = inputStream.next(); To show the methods I am using now, I am just using a Scanner to read from the file and for Scanner inputStream, I am using nextInt() or next() to get variables from the file. Obviously this will not work when I do not know exactly how many classes each student will have.

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP. Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get files back that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best-practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution. Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following: Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule. Supports interrupted/resumed backups (and hopefully file-delta only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so. Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC. Supports local-to-local file delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the local stored backups would be pushed up to the server whenever network link is available. Isn't configurable on the clients without certain credentials. Because the CFOs (who won't give up their admin rights on the domain) will disable it if they can. Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know). I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2 hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server, and hundreds on clients, if necessary. 
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • Agile Awakenings and the Rules of Agile

    - by Robert May
    For those that care, you can read my history of management and technology to understand why I think I’m qualified to talk about this at all.  It’s boring, so feel free to skip it. Awakenings I first started to play around with the idea of “agile” in 2004 or 2005.  I found a book on the Rational Unified Process that I thought was good, and attempted to implement parts of it.  I thought I was agile, but really, it wasn’t.   I still didn’t understand the concept of a team.  I still wanted to tell the team what to do and how to get it done.  I still thought I was smarter than the team. After that job, I started work on another project and began helping that team.  The first few months were really rough.  We were implementing Scrum, which was relatively new to everyone on the team, and, quite frankly, I was doing a poor job of it.  I was trying to micro-manage every aspect of the teams work, and we were all miserable. The moment of change came when the senior architect bailed on the project.  His comment to me was: “This isn’t Agile.  Where are the stand-ups?  Where are the stories?”  He was dead on, and I finally woke up.  I finally realized that I was the problem!  I wasn’t trusting the team.  I wasn’t helping the team.  I was being a manager. Like many (most?), I was claiming to be Agile and use Scrum, but I wasn’t in fact following the rules Scrum.  Since then, I’ve done a lot of studying, hands on practice, coaching of many different teams, and other learning around Scrum, and I have discovered that Scrum has some rules that must be followed for success, even though the process is about continuous improvement. I’ve been practicing Scrum right for about 4 years now and have helped multiple teams implement it successfully, so what you’re about to get is based on experience, rather than just theory. The Rules of Scrum In my experience, what I’ve found is that most companies that claim to be doing Scrum or Agile are actually NOT doing either.  This stems largely because they think that they can “adopt the rules of Agile that fit their organization.”  Sadly, many of them think that this means they can adopt iterations (sprints) and not much else.  Either that, or they think they can do whatever they want, or were doing before, and call it Scrum.  This is simply not true. Here are some rules that must be followed for you to really be doing Scrum.  I’ll go into detail on each one of these posts in future blog posts and update links here.  My intent is that this will help other teams implementing scrum to see more success. Agile does not allow you to do whatever you want A Product Owner is required A ScrumMaster is required The team must function as a Team, and QA must be part of the team Support from upper management is required A prioritized product backlog is required A prioritized sprint backlog is required Release planning is required Complete spring planning is required Showcases are required Velocity must be measured Retrospectives are required Daily stand-ups are required Visibility is absolutely required For now, I think that’s enough, although I reserve the right to add more.  If you’re breaking any of these rules, you’re probably not doing Scrum.  There are exceptions to these rules, but until you have practiced Scrum for a while, you don’t know what those exceptions are. Breaking the Rules Many teams break these rules because they are the ones that expose the most pain.  Scrum is not Advil.  It’s not intended to mask the pain, its intended to cure it.  
Let me explain that analogy a bit more.  Recently, my 7 year old son broke his arm, quite severely (see the X-Ray to the right).  That caused him a great deal of pain.  We went first to one doctor, and after viewing the X-Ray, they determined that there was no way that they’d cast the arm at their location.  It was simply too bad of a break for them to deal with.  They did, however, give him some Advil for the pain and put a splint on his arm to stabilize the broken bones.  Within minutes, he was feeling much better.  Had we been stupid, we could have gone home and he’d have been just as happy as ever . . . until the pain medication wore off or one of his siblings touched the splint.  Then, all of that pain would come right back to the top.  Sure, he could make it go away by just taking more Advil and moving the splint out of the way, but that wasn’t going to fix the problem permanently. We ended up in an emergency room with a doctor who could fix his arm.  However, we were warned that the fix was going to be VERY painful, and it was.  Even with heavy sedation (Propofol), my son was in enough pain that he squirmed and wiggled trying to get his arm away from the doctor.  He had to endure this pain in order to have a functional arm. But the setting wasn’t the end.  He had to have several casts, had to have it re-broken once, since the first setting didn’t take and finally was given a clean bill of health. Agile implementation is much like this story.  Agile was developed as a result of people recognizing that the development methodologies that were currently in place simply were ineffective.  However, the fix to the broken development that’s been festering for many years is not painless.  Many people start Agile thinking that things will be wonderful.  They won’t!  Agile is about visibility, and often, it brings great pain to surface.  It causes all of the missed deadlines, the cowboy coders, the coasters, the micro-managers, the lazy, and all of the other problems that are really part of your development process now to become painfully visible to EVERYONE.  Many people don’t like this exposure.  Agile will make the pain better, but not if you remove the cast (the rules above) prematurely and start breaking the rules that expose the most pain.  The healing will take time and is not instant (like Advil).  Figuring out what the true source of pain and fixing it is very valuable to you, your team, and your company.  Remember as you’re doing this that Agile isn’t the source of the pain, it’s really just exposing it.  Find the source. My recommendation is that ALL of these rules are followed for a minimum of six months, and preferably for an entire year, before you decide to break any of these rules.  Get a few good releases under your belt.  Figure out what your velocity is and start firing as a team.  Chances are, after you see agile really in action, you won’t want to break the rules because you’ll see their value. More Reading Jean Tabaka recently published a list of 78 Things I Have Learned in 6 Years of Agile Coaching.  Highly recommended. Technorati Tags: Agile,Scrum,Rules

    Read the article

  • Validating a linked item&rsquo;s data template in Sitecore

    - by Kyle Burns
    I’ve been doing quite a bit of work in Sitecore recently and last week I encountered a situation that it appears many others have hit. I was working with a field that had been configured originally as a grouped droplink, but now needed to be updated to support additional levels of hierarchy in the folder structure. If you’ve done any work in Sitecore that statement makes sense, but if not it may seem a bit cryptic. Sitecore offers a number of different field types, and a subset of these field types focuses on providing links either to other items on the content tree or to content that is not stored in Sitecore. In the case of the grouped droplink, the field is configured with a “root” folder and each direct descendant of this folder is considered to be a header for a grouping of other items and displayed in a dropdown. A picture is worth a thousand words, so consider the following piece of a content tree: If I configure a grouped droplink field to use the “Current” folder as its datasource, the control that gets to my content author looks like this: This presents a nicely organized display and limits the user to selecting only the direct grandchildren of the folder root. It also presents the limitation that struck us as we were thinking through the content architecture and how it would hold up over time – the authors cannot further organize content under the root folder because of the structure required for the dropdown to work. Over time, not allowing the hierarchy to go any deeper would prevent our authors from being able to organize their content in a way that it would be found when needed, so the grouped droplink data type was not going to fit the bill.
    I needed to look for an alternative data type that allowed for selection of a single item and limited my choices to descendants of a specific node on the content tree. After looking at the options available for links in Sitecore and considering them against each other, one option stood out as nearly perfect – the droptree. This field type stores its data identically to the droplink and allows for the selection of zero or one items under a specific node in the content tree. By changing my data template to use droptree instead of grouped droplink, the author is now presented with the following when selecting a linked item: Sounds great, but I did say almost perfect – there’s still one flaw. The code intended to display the linked item is expecting the selection to use a specific data template (or more precisely it makes certain assumptions about the fields that will be present), but the droptree does nothing to prevent the author from selecting a folder (since folders are items too) instead of one of the items contained within a folder. I looked to see if anyone had already solved this problem. I found many people discussing the problem, but the closest that I found to a solution was the statement “the best thing would probably be to create a custom validator” with no further discussion of what this validator might look like. I needed to create my own validator to ensure that the user had not selected a folder. Since so many people had the same issue, I decided to make the validator as reusable as possible and share it here.
    The validator that I created inherits from StandardValidator. In order to make the validator more intuitive to developers that are familiar with the TreeList controls in Sitecore, I chose to implement the following parameters:
    ExcludeTemplatesForSelection – serves as a “deny list”. If the data template of the selected item is in this list, it will not validate.
    IncludeTemplatesForSelection – this can either be empty, to indicate that any template not contained in the exclusion list is acceptable, or it can contain the list of acceptable templates.
    Now that I’ve explained the parameters and the purpose of the validator, I’ll let the code do the rest of the talking:

    /// <summary>
    /// Validates that a link field value meets template requirements
    /// specified using the following parameters:
    /// - ExcludeTemplatesForSelection: If present, the item being
    ///   based on an excluded template will cause validation to fail.
    /// - IncludeTemplatesForSelection: If present, the item not being
    ///   based on an included template will cause validation to fail
    ///
    /// ExcludeTemplatesForSelection trumps IncludeTemplatesForSelection
    /// if the same value appears in both lists. Lists are comma separated
    /// </summary>
    [Serializable]
    public class LinkItemTemplateValidator : StandardValidator
    {
        public LinkItemTemplateValidator()
        {
        }

        /// <summary>
        /// Serialization constructor is required by the runtime
        /// </summary>
        /// <param name="info"></param>
        /// <param name="context"></param>
        public LinkItemTemplateValidator(SerializationInfo info, StreamingContext context) : base(info, context) { }

        /// <summary>
        /// Returns whether the linked item meets the template
        /// constraints specified in the parameters
        /// </summary>
        /// <returns>
        /// The result of the evaluation.
        /// </returns>
        protected override ValidatorResult Evaluate()
        {
            if (string.IsNullOrWhiteSpace(ControlValidationValue))
            {
                return ValidatorResult.Valid; // let "required" validation handle
            }

            var excludeString = Parameters["ExcludeTemplatesForSelection"];
            var includeString = Parameters["IncludeTemplatesForSelection"];
            if (string.IsNullOrWhiteSpace(excludeString) && string.IsNullOrWhiteSpace(includeString))
            {
                return ValidatorResult.Valid; // "allow anything" if no params
            }

            Guid linkedItemGuid;
            if (!Guid.TryParse(ControlValidationValue, out linkedItemGuid))
            {
                return ValidatorResult.Valid; // probably put validator on wrong field
            }

            var item = GetItem();
            var linkedItem = item.Database.GetItem(new ID(linkedItemGuid));

            if (linkedItem == null)
            {
                return ValidatorResult.Valid; // this validator isn't for broken links
            }

            var exclusionList = (excludeString ?? string.Empty).Split(',');
            var inclusionList = (includeString ?? string.Empty).Split(',');

            if ((inclusionList.Length == 0 || inclusionList.Contains(linkedItem.TemplateName))
                && !exclusionList.Contains(linkedItem.TemplateName))
            {
                return ValidatorResult.Valid;
            }

            Text = GetText("The field \"{0}\" specifies an item which is based on template \"{1}\". This template is not valid for selection", GetFieldDisplayName(), linkedItem.TemplateName);

            return GetFailedResult(ValidatorResult.FatalError);
        }

        protected override ValidatorResult GetMaxValidatorResult()
        {
            return ValidatorResult.FatalError;
        }

        public override string Name
        {
            get { return @"LinkItemTemplateValidator"; }
        }
    }

    In this blog entry, I have shared some code that I found useful in solving a problem that seemed fairly common.
Hopefully the next person that is looking for this answer finds it useful as well.

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL code reviews, you’ll know those signs in the code that all might not be well. These ‘Code Smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in Code Smells in C#. I’ve always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding Code Smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering whether there is a general opinion amongst database developers as to what the most important SQL Smells are. One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a CodeSmell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev’s wonderful article Ten Common SQL Programming Mistakes lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that, if anyone tweets a suggestion with a #SQLCodeSmells tag (or sends me a tweet) I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each 'smell' is the name of the person who tweeted me, commented about it, or who has written about the 'smell'. It does not imply that they were the first ever to think of the smell! Use of deprecated syntax such as *= (Dave Howard) Denormalisation that requires the shredding of the contents of columns.
(Merrill Aldrich) Contrived interfaces. Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard) Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev) Using correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev) The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler) Few or no comments. Use of functions in a WHERE clause. (Anil Das) Overuse of scalar UDFs (Dave Howard, Plamen Ratchev) Excessive ‘overloading’ of routines. The use of Exec xp_cmdShell (Merrill Aldrich) Excessive use of brackets. (Dave Levy) Lack of the use of a semicolon to terminate statements. Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev) Duplicated code, or strikingly similar code. Misuse of SELECT * (Plamen Ratchev) Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills) Overuse of CLR routines when not necessary (Sam Stange) Same column name in different tables with different datatypes. (Ian Stirk) Use of ‘broken’ functions such as ‘ISNUMERIC’ without additional checks. Excessive use of the WHILE loop (Merrill Aldrich) INSERT ... EXEC (Merrill Aldrich) The use of stored procedures where a view is sufficient (Merrill Aldrich) Not using two-part object names (Merrill Aldrich) Using INSERT INTO without specifying the columns and their order (Merrill Aldrich) Full outer joins even when they are not needed. (Plamen Ratchev) Huge stored procedures (hundreds/thousands of lines). Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs. Code that is never used. Complex and nested conditionals. WHILE (not done) loops without an error exit. Variable name same as the datatype. Vague identifiers. Storing complex data or lists in a character map, bitmap or XML field. User procedures with sp_ prefix (Aaron Bertrand) Views that reference views that reference views that reference views (Aaron Bertrand) Inappropriate use of sql_variant (Neil Hambly) Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand) Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield-Atlantis UK) Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield-Atlantis UK) Code that allows SQL Injection (Mladen Prajdic) Tables without clustered indexes (Matt Whitfield-Atlantis UK) Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison) Multiple stored procedures with nearly identical implementation. (Nick Harrison) Excessive column aliasing may point to a problem or it could be a mapping implementation. (Nick Harrison) Joining "too many" tables in a query. (Nick Harrison) Stored procedure returning more than one record set. (Nick Harrison) A NOT LIKE condition (Nick Harrison) Excessive "OR" conditions. (Nick Harrison) sp_OACreate or anything related to it (Bill Fellows) Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka) Aliases that go a,b,c,d,e... (Dave Levy/Diane McNurlan) Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis) Order by 3,2 (Dave Levy) Multi-statement table functions which are then filtered 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne) Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
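One of the smells above, code that allows SQL injection (Mladen Prajdic's entry), is as much an application-side habit as a T-SQL one. The following C# sketch is illustrative only and is not from the article; the Customers table, its columns, and the connection string are made-up names. It contrasts the smelly string-concatenation pattern with a parameterized query, which also sidesteps the SELECT * smell:

    using System;
    using System.Data.SqlClient;

    class CustomerLookup
    {
        // Smelly: concatenating user input into the SQL text allows injection,
        // and SELECT * drags back columns the caller never asked for.
        static string BuildSmellyQuery(string name)
        {
            return "SELECT * FROM Customers WHERE Name = '" + name + "'";
        }

        // Safer: the value travels as a parameter, never as SQL text,
        // and only the needed columns are selected.
        static void PrintCustomer(string connectionString, string name)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT CustomerId, Name FROM Customers WHERE Name = @name", connection))
            {
                command.Parameters.AddWithValue("@name", name);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }
        }
    }

Because the parameterized text never changes, SQL Server can also cache and reuse the plan, which is a pleasant side effect of removing the smell.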

    Read the article

  • Best of Breed vs. Suite – Oracle’s SaaS Delivers Both

    - by yaldahhakim
    The debate of which is better: “best of breed” business applications vs. an integrated suite is certainly not a new conversation. This has been argued between IT vendors and CIOs for years. It’s also important to clarify that “best of breed” does not necessarily translate into being the richest functionality; rather it’s often about just having the best fit solution to solve a specific business problem or need. So what does cloud have to do with the niche vs. suite debate? Consuming business applications in a cloud or SaaS deployment model can change the best of breed vs. suite discussion - if the cloud is done right. It’s having your cake and eating it too, only better: you don’t have to gather all the ingredients or wait to bake your cake, and you can adjust how big a slice you take. Before you eat, it’s worth pausing to recall much of what we learned about IT over the last decade. These basic IT principles still hold true even though the financial model has changed from buying to renting. In other words, what’s under the technology hood still matters. Architecture and development methodologies - like building an application based on open standards so it works with other systems - are still important. Data and information silos, complex integrations, and proprietary technologies that lock you in are still bad. While some may argue that IT no longer matters with cloud, the opposite is actually true. If anything, cloud can help return IT to its rightful place as a key strategic asset vs. a liability on the balance sheet. The “I” in CIO was never meant to stand for “integration”, yet it’s amazing how much time and money is poured into these types of initiatives for most organizations each year. Rather, the “I” needs to stand for “innovation”. This is where Oracle SaaS can uniquely help. Oracle’s application strategy has not really changed over the years. It’s always been about bringing the best and richest functionality across the enterprise to our customers while leveraging a common, standards-based, and enterprise-grade platform. So not just best fit, but the best capabilities based on the input of thousands of enterprise customers across the globe. Oracle invests billions in R&D every year to add new capabilities to the broadest cloud portfolio in the industry, spanning functional pillars like CRM, HCM, ERP, etc. And where it makes sense, Oracle combines key strategic acquisitions to complement organic functionality. The result is best of breed delivered in a suite. Again, this is not something new. The game changer now with cloud is that it impacts HOW Oracle customers adopt the richest, most modern applications across the business – and continue getting them.
Consuming Oracle applications in the cloud means you can adopt new capabilities and updates very quickly and easily. There’s no hardware to buy or software to manage. Oracle does it for you. Low upfront costs and an OpEx financial model are the easy part. Oracle Cloud Applications take it a big step further. For organizations that demand having the latest and richest functionality and accelerating the time to value from their IT investment, Oracle Cloud is the right path. It’s about holistically changing the “hows” and the “whys” of the organization by leveraging transformational innovations like social, mobile, and big data in a consistent and more powerful way. It’s not just about sales force automation or talent management. These technologies should impact all parts of the company, and Oracle Cloud is the enterprise-grade delivery vehicle. Oracle SaaS helps break down barriers of adoption and eases the headache of upgrades, investing in new supporting hardware, or adding internal expertise to manage it all. With Oracle Cloud, customers can get best of breed capabilities in either a full suite model or a la carte. And because it’s entirely built on open standards, it’s built to co-exist with existing IT investments. Updates can be automatic or delayed based on a customer’s requirements. And it’s complete – a full suite of cross-pillar functionality. Even better, if you don’t like it, or need more or less, just turn the dial up or down. Just like your utility bill, you pay for what you use, and can consume more or less power whenever you need it. Lower cost, lower investment risk, without compromising on functionality, security, or performance. Technology still matters in the cloud. So our cloud customers also like that when they adopt our cloud applications, they also get the best underlying technology, from the middleware and database platform down to infrastructure and Oracle’s engineered systems. Therefore it’s not just the latest and greatest in application functionality; everything underneath that makes it work is also the latest and greatest. The best of breed technology stack powering best of breed business applications, all delivered in a subscription-based model. The best of both worlds. Yep, that’s the idea.

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket. First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One. Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There are still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess. The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch), and then jumped back into Ryse? That's gone, if you bought the game on disc. The new, DRM-free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located? That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing? Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I could wish to. All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flak). There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft. I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent from Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings. The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money grubbing scheme?
Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn’t last, and most publishers have stopped the practice. The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then they could keep their business model and reduce their brick and mortar footprint. The digital landscape is changing. We need to not block this process. As Seth MacFarlane said, "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue where we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first and second party developers that can leverage this new framework to prove its validity. Over time, the third party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a Co-Lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products. I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward looking people that help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job, and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted. Not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again? Yes, I would. My first election experience has been a learning experience in itself. So I accept the result and look forward to applying next year. Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic), who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed – when the candidate list was released – that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels as if only Brits and Americans partake in such elections. This is a real shame, and I can’t help wondering why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts. He did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference, please see Bill Graziano’s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table. 1. Use the PASS mission statement to define a tactical objective that engages community leaders in the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website: “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning”. PASS fulfil this mission statement regularly, whether you attend SQL Saturday, SQLRally, or the SQLPASS and BA conference itself. The biggest value of PASS is the ability to bring our profession together. And the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define tactical objectives that bring greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could hold a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process. 2.
Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful, as we have seen massive growth of events across the world. For that, I congratulate the board. Perhaps the time is now right to look at solidifying this success through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective countries or regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been a big growth in PASS Chapters and SQL Server events that was approaching saturation point. The introduction of the Community Engagement Day – channelled through the SQLBits conference – has enabled Chapter Leaders to collaborate, connect and share with PASS, sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASS HQ. This brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have found out about, and/or encouraged, Chapter Leaders putting themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASS HQ providing support in the form of a room or Lync access for the event to take place. It would be really good if a PASS HQ representative could attend in person as well. 3. Restrict candidates to serve only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity contest. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BoD, other candidates may be encouraged to be more actively involved within the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets that stated that more active community members did not apply for the Nom Com. I struggled to understand how the individuals behind those tweets measured “more active”. It just further solidified the subjective nature of elections. In the absence of changes to how candidates are put forward for the elections, a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals on the Nom Com and BoD be limited to a certain number of terms? Yes/No. What is the maximum number of terms a candidate could serve? It would be simple to execute such a vote, and the community would have an opportunity to have a say in an important aspect of the PASS organisation. And if the change is successful, then add it as a byelaw. So there are some of my thoughts. I am not saying they are right or wrong.
But I do hope that there is a concerted effort to encourage more candidates from other reaches of the globe to become involved with future elections. It would be good to hear your thoughts. Thanks, Chris

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See the original @Java_EE tweet of 29 May 2014. Yeap, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened): J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to then implement application requirements or improve productivity. J2EE used to require hundreds of lines of XML code to define just a dozen resources like EJBs, MDBs, Servlets, and so on. J2EE used to support only EARs (Enterprise Archives) with a bunch of other archives like JARs and WARs just to run a simple Web application. And so on, and so on! It was a great technology but still required a lot of work to get something up and running. Remember xDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience). A new DX for J2EE suggested a new name. One that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps: radical simplification of the persistence model with the introduction of JPA, support for annotations following the launch of Java SE 5.0, updated XML APIs with the introduction of StAX, drastic simplification of the EJB component model (with annotations!), convention over configuration and dependency injection. A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, providing convention-over-configuration, etc! We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlet APIs, JASPIC, the possibility to deploy plain WARs and so many other improvements it is difficult to list them in one sentence. And we've gotta give the Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard! Cooperation to keep improving DX to the maximum in the server-side Java landscape. The masterpiece resulting from these previous releases is what we see and call today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for Web development like the WebSockets API, improved JAX-RS, and JSON-P, and also including the Batch API and so many other great improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers.
To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created, to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album, or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-) A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. To not only save you, recruiter, valuable characters when tweeting that job opportunity but to also match the correct term, we invite you to replace long terms like "Java/J2EE" or even worse "#Java #J2EE #JEE" or all these awkward combinations with the only acceptable hashtag: #JavaEE. And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight here how the job trends look! The image below is from the Indeed.com trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase. Perhaps because some people are just too lazy to type "Java" but at the same time they are aware that J2EE (the '2') is past. We shall forgive that for a while :-) Another proof that J2EE is going away is by looking at its trending statistics at Google. People have been showing less and less interest in the term J2EE. See the chart below: Recruiter, if you still need proof that J2EE is past, that Java EE is trending, that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you what term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-) Last but not least, the Google Trends chart for the Java EE term (including the still wrong but forgivable JavaEE term) shows us that the new term is catching up very well. J2EE is past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS, and BigData are going up. Java EE and other traditional server-side technologies such as Spring, or even those from other platforms such as Ruby on Rails, PHP, and Grails, are pretty much consolidated and the curves... well, they are consolidated too. So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters also know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers. Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was: Found it easier to influence change by showing how X can’t solve our problems or X is extremely costly to scale. Money talks. I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don’t present a full view of the costs associated with a solution. An “Acura”te Example (I’m so clever…) This is my dream vehicle, a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We’re a family of 4 (5 if my daughters ever get their wish of adding a dog), and I’ve always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8 – 10 year mark. MSRP – $62,890 But as we all know, that’s not *really* the cost of the vehicle. There’s taxes and fees added on, there’s the extended warranty if I choose to purchase it, there’s the finance rate that needs to be factored in… MSRP – $62,890 Taxes – $7,546 Warranty – $2,500 SubTotal – $72,936 Finance Charge – $1,094.04 Grand Total – $74,030 Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well, now we have our true price… or do we? Lifetime of the Vehicle I’m expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here’s what I’ve found: Fuel and Mileage Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I’ll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I’ll be spending approximately $8,000 – $10,000 more on gas over a 10 year period than with my current Rav4. Service Options Limited Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there’s only one Acura dealership in all of Winnipeg! So if, for whatever reason, I’m not satisfied with the level of service, I’m stuck. Non Warranty Service Work Also, let’s not forget that there’s a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I’ll need to get new tires at the 5 year mark as well, which can easily be $1,200 – $1,500 (I just paid $1,000 for new tires for the Rav4 and we’re at the 5 year mark). Now these aren’t going to be *new* costs that I’m not used to from our existing vehicles, but they should still be factored in. I’d budget $500/year, or $5,000 over the 10 years I’ll own the vehicle.
Final Assessment So let’s re-assess the true cost of my dream MDX: MSRP $62,890, Taxes $7,546, Warranty $2,500, Finance Charge $1,094, Gas $10,000, Service Work $5,000 – Grand Total $89,030. So now I have a better idea of the 10 year cost overall, and I’ve identified some concerns with local service availability. And there’s now much more to consider over the original $62,890 price tag. Tying This Back to Technology Solutions The process that we just went through is no different than what organizations do when considering implementing a new system, technology, or technology based solution within their environments. It’s easy to tout the short term cost savings of a particular product/platform/technology in a vacuum. But it’s when you consider the wider impact that the true cost comes into play. Let’s create a scenario: A company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are: - Because it’s open source it’s free - The organization would have access to the source code so they could alter it however they wished - It provides features not available with the current reporting suite At first this sounds great to the management and executive, but then they start asking some questions and uncover more information: - The OSS product is built on a technology not used anywhere within the organization - There are no vendors offering product support for the OSS product - The OSS product requires a specific server platform to operate on, one that’s not standard in the organization All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there was no vendor to contact for support. The true cost of implementing a “free” OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that’s just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let’s not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system. Here’s the Point We still tend to look at software in an “off the shelf” kind of way. It’s very easy to say “oh, this product is better than vendor X’s product – and it’s free because it’s OSS!” but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you’re pitching an improved business process, a new system, or a new technology, you need to consider the bigger picture costs of implementation.
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me, I’ve been away from blogging and twittering for a couple of months. This is the reason. For the last few months I’ve been working with a cross-functional team putting together a new product from the people that run WordPress, the free premier blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly, delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today. The Reason Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I’m no stranger to WordPress and its 5 minute no-holds-barred install (I’ve always wanted SharePoint to do this!) and I run my personal blog on WordPress, as does my better half, Princess Jenn. There’s always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here] over the past year or so. I checked each and every one of them out, but they fell woefully short when it came to SharePoint’s document management, versioning, and customization. They try, but it’s never been up to par in my books. On the flipside, SharePoint has always been tops in collaboration in the Enterprise, but it’s painful to develop web parts, UI customization can be tricky, and there’s just no user community for something as simple as themes and designs. The Product Enter SharePress. Is it SharePoint? Is it WordPress? It’s both, and neither. Everything you like about both products is there, but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts including: The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint Any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress SEO-friendly URLs and pages Permalinks for all content All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price Small deployment footprint. You decide how much to deploy and where. Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL Portable Rendering Engine Layer (PREL) so you can host .NET or PHP on Apache or IIS (version 7 or higher). The install feature is built around WordPress and its famous 5-minute install (actually, it’s never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP): After you enter two fields of information, click “Install SharePress” and you’ll be done: No mess, no fuss, no complicated dependencies, and no server access required! How much simpler could this be? The Technology WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython, which has now reached a maturity level capable of doing on-the-fly code language conversions. SharePress is a brand new product not built on top of any previous platform, but it leverages all the power of each of those applications through a patent pending technique called SharePress Multi-plAtfoRm Technology (SMART).
SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor) which is then compiled to MSIL and then delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it’s all behind the scenes and you don’t have to worry about a thing. This image illustrates the technology stack and process: So users can load up out of the box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter and output MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx with version 6 support to come when it’s released. Similarly you can take any .NET application, DotNetNuke Module, SharePoint Web Part or event handler and feed it into the converter to output the same. Everything is reverse compiled into MSIL so it becomes technology agnostic. No source code access is needed and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also with the flip of a switch have the output create PHP pages for you. This allows you to run SharePress on Unix based systems running PHP and MySQL, allowing you to deliver your SharePoint like experience to your users with a $0 infrastructure footprint. Here’s SharePress with the default WordPress post imported then a stock SharePoint collaboration site was imported. The site was then applied with the default Kubrick theme from WordPress. The Features Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser. Built-in Web 2.0 jQuery Enabled End User and Administrator Web Interface. Never have to remote into a server again! Run any SharePoint Web Part or Event Handler directly without modification or access to source code in SharePress. Use any WordPress/Joomla/Drupal plug-in directly in SharePress, no local admin or access to server. Just upload and activate. Upload and Activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately. Password Protected Content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box. Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort. Easy Importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint so you can populate your site quickly and easily with full metadata modeling and creation. Banner Management. It’s easy to setup banners for your web site complete with impression numbers, special URLs, and more. Menu Manager. 
The Menu Manager allows you to create as many menus as you want, each one can be associated to specific audiences or roles and then be styled across multiple contexts including the same menu delivered as a fly out, rollover, drop down, and just about any navigation you can think of. Collaborative ShareBook. Our exclusive book feature allows you to setup a “book” and then authorize individuals to contribute content. Permalinks. All content in SharePress has a permanent or “perma link” associated with it so people can link to it freely without fear of broken links. Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide. Database Independence. We know people wanted to run on any database platform so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix Shell scripts, VBA, or simple DOS batch files. The Team SharePress is the work of a lot of people in both the WordPress and SharePoint community. I worked with a lot of SharePoint MVPs to create this new product as we really wanted to deliver the most compatible and feature rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and all of their expertise and innovative thinking to get this product out. Licensing and Pricing SharePress is still in the final stages for pricing but we’re looking at a price point somewhere between $99-$100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what’s available and it’s just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code to the SharePress Foundation System and the .NET 4.0 Framework source code. Conclusion We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both Intranets and the Internet. We think we’ve build the best of breed solutions here and made it easy for anyone to get started with a minimal of infrastructure but allow the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback so please leave comments as to what you’re looking for in this system as we’re always evolving it to make it a better product for everyone.

    Read the article

  • Building services with .Net Part 1

    - by Allan Rwakatungu
    On the 26th of May 2010, I made a presentation to the .NET user group meeting (thanks to Malisa Ncube for organizing this event every month …). If you missed my presentation, we talked about why we should all be building services … better still, using the .NET framework. This blog post is an introduction to services, why you would want to build services and how you can build services using the .NET framework. What is a service? OASIS defines a service as "a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description." [1]. If the above definition sounds too academic, you can also define services as loosely coupled units of functionality that have no calls to each other embedded in them. Instead of services embedding calls to each other in their service code, they use defined protocols that describe how services pass and parse messages. This is a good way to think about services if you’re from an object oriented background. While in object oriented programming functions make calls to each other, in service oriented programming, functions pass messages between each other. Why would you want to use services? 1. If your enterprise architecture looks like this. Services are the building blocks for SOA. With SOA you can move away from the spaghetti infrastructure that is common in most enterprises. The complexity or lack of visibility of the integration points in your enterprise makes it difficult and costly to implement new initiatives and changes into the business - and even impossible in some cases - as it is not possible to identify the impact a change in one system might have on other systems. With services you can move to an architecture like this. Services are the building blocks that take you from spaghetti infrastructure to something that is more well-defined and manageable, helping you achieve cost efficiency and, not least, business agility - enabling you to react to changes in the market with speed and achieve operational efficiency and control. 2. If you want to become the next Gates or Zuckerberg. Have you heard about Web 2.0? Mashups? Software as a Service (SaaS)? Cloud computing? They all offer you the opportunity to have scalable but low cost business models, and they are built using services. Some of my favorite companies that leverage services for their business models include https://www.salesforce.com/ (cloud CRM), http://www.twitter.com (more people use Twitter clients built by 3rd parties than their official clients) and http://www.kayak.com/ (compares data from other travel sites to give information to users in one location). Services with the .NET framework If you are a .NET developer and you want to develop services, Windows Communication Foundation (WCF) is the tool for you. WCF is Microsoft’s unified programming model (service model) for building service oriented applications. (Before .NET 3.0 you had several models for programming services in .NET, including .NET remoting, Web services (ASMX), COM+, Microsoft Message Queuing (MSMQ) etc.; after .NET 3.0 the programming model was unified into one, i.e. WCF.) Windows Communication Foundation (WCF) provides you with 1. A Software Development Kit (SDK) for creating SOA applications 2. A runtime for running services on the Windows platform Why should you use Windows Communication Foundation if you’re programming services? 1. It supports interoperable and open standards e.g.
WS-* protocols for programming SOAP services.
2. It has a unified programming model. Whether you use TCP, HTTP, named pipes or message queues, programmers need to learn just one way to program. Previously you had .NET Remoting, MSMQ, Web services and COM+, and they were all done differently.
3. It is a productive programming model. You don't have to worry about all the plumbing involved in writing services. You have a rich declarative programming model for adding things like logging, transactions and reliable messaging, built into Windows Communication Foundation.

Understanding services in WCF

The basic principles of WCF are as easy as ABC:
A - Address: where the service is located.
B - Binding: how you communicate with the service, e.g. whether to use TCP, HTTP or both, how to exchange security information with the service, etc.
C - Contract: what the service can do, e.g. pay a water bill, make a phone call.

A - Addresses
In WCF, an address is a combination of transport, server name, port and path. Example addresses include:
http://localhost:8001
net.tcp://localhost:8002/MyService
net.pipe://localhost/MyPipe
net.msmq://localhost/private/MyService
net.msmq://localhost/MyService

B - Binding
There are numerous ways to communicate with services - different ways that a message can be formatted/sent/secured - which allows you to tailor your service for the compatibility and performance you require for your solution.
Transport: you can use HTTP, TCP, MSMQ, named pipes, your own custom transport, etc.
Message: you can send a plain text, binary or Message Transmission Optimization Mechanism (MTOM) message.
Communication security: no security, transport security, message security, authenticating and authorizing callers, etc.
Behaviour: your service can support transactions, be reliable, use queues, support AJAX, etc.

C - Contract
You define what your service can do using:
Service contracts - define the operations your service can perform, its communications and behaviours.
Data contracts - define the messages that are passed from and into your service and how they are formatted.
Fault contracts - define the error types in your service.

As an example, suppose your service shows money. You define your service contract using an interface:

[ServiceContract]
public interface IShowMeTheMoney
{
    [OperationContract]
    Money Show();
}

You define the data contract by annotating a class with the DataContract attribute and the fields you want to pass in the message as DataMembers. (Note: in later versions of WCF you don't have to use the attributes if you are passing all of the object's properties in the message.)

[DataContract]
public class Money
{
    [DataMember]
    public string Currency { get; set; }

    [DataMember]
    public decimal Amount { get; set; }

    public string Comment { get; set; }
}

(A minimal hosting and client sketch using this contract appears at the end of this post.)

Features of Windows Communication Foundation

Windows Communication Foundation is not only simple but feature rich, offering you several options to tweak your service to fit your business requirements. Some of the features of WCF include:
1. Workflow services. You can combine WCF with Windows Workflow Foundation (WF) to write workflow-style services.
2. Control over how your data (messages) is transferred and serialized, e.g. you can serialize your business objects as XML or binary.
3. Control over session management, instance creation and concurrency management, without writing code if you like.
4. Queues and reliable sessions. You can store messages from the sending client and later forward them to the receiving application.
You can also guarantee that messages will arrive at their destination.
5. Transactions. You can have different services participate in transactions - operations that can be rolled back if needed.
6. Security. WCF has rich features for authorization and authentication, as well as for keeping audit trails.
7. Web programming model. WCF allows developers to expose services as non-SOAP endpoints.
8. Built-in features that you can use to write JSON services and services that support AJAX applications.
And lots more. In my next blog I will show you how you can use WCF features to write a real-world business service.
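To tie the ABC pieces together, here is a minimal, hedged sketch (not from the original post) of how the IShowMeTheMoney contract above could be self-hosted and called from a client. The MoneyService implementation, the localhost address and the choice of BasicHttpBinding are illustrative assumptions.

using System;
using System.ServiceModel; // ServiceHost, ChannelFactory, bindings

// Illustrative implementation of the IShowMeTheMoney contract defined above.
public class MoneyService : IShowMeTheMoney
{
    public Money Show()
    {
        return new Money { Currency = "UGX", Amount = 1000m, Comment = "Sample" };
    }
}

public static class Program
{
    public static void Main()
    {
        // A = Address, B = Binding, C = Contract
        var address = new Uri("http://localhost:8001/MoneyService"); // assumed address
        var binding = new BasicHttpBinding();                        // interoperable SOAP over HTTP

        using (var host = new ServiceHost(typeof(MoneyService)))
        {
            host.AddServiceEndpoint(typeof(IShowMeTheMoney), binding, address);
            host.Open();
            Console.WriteLine("Service listening at {0}", address);

            // The client uses the same address, binding and contract to build a proxy.
            var factory = new ChannelFactory<IShowMeTheMoney>(binding, new EndpointAddress(address));
            IShowMeTheMoney proxy = factory.CreateChannel();
            Money money = proxy.Show();
            Console.WriteLine("{0} {1}", money.Currency, money.Amount);

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}

Swapping BasicHttpBinding for, say, NetTcpBinding and a net.tcp:// address changes how messages travel without touching the contract or the service code, which is the point of the ABC separation.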

    Read the article

  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them, and in this first post I would like to have a quick walkthrough of the new preview developer portal.

History of the Developer Portal

If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features; my impression from using that old one is that the layout was not very attractive and the features were very limited. In November 2010, along with the SDK 1.3 release, the developer portal got a big jump. In order to provide more usability and features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features have been added to this portal, such as remote desktop, co-admin, Windows Azure Connect, the VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser compatibility: as you know, most mobile and tablet browsers do not allow rich content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though all they may need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

Preview Portal vs. Silverlight Portal

Before I start to talk about the new preview portal I'd better highlight that this is a PREVIEW version. Even though you can do almost everything that is already in the old one, as well as use some cool new features I will mention in the coming several posts, some things are still being developed and migrated, so sometimes you need to switch back to the old portal. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function will take you back to the old SQL Azure Manage Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, to get back to the previous portal you need to click the preview button on top of the page and then the "Take me to the previous portal" link.

Overview

There are four parts in the preview portal. On the top is the header, which shows the account you are currently logged in with. If you click on the header it will show the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the price information page, community and account, etc. The navigation bar is on the left-hand side, with the categories listed below.
ALL ITEMS: all items in your Windows Azure account, including the web sites, services, databases, etc.
WEB SITES: the web sites in your Windows Azure account. It will only show the web sites you have; the linked resources will be shown if you drill down into a web site.
VIRTUAL MACHINES: the virtual machines that you have deployed to Azure.
CLOUD SERVICES: all Windows Azure hosted services in your account.
SQL DATABASES: all SQL databases (SQL Azure) in your account.
STORAGE: all Windows Azure storage services in your account.
NETWORKS: the virtual networks (Windows Azure Connect) you have created.
The available items are listed in the main part of the page based on which category you currently have selected. If there are no items, it shows you a quick-create link. At the bottom of the page there is the command and information bar. Based on what is selected and what the user is doing, it shows the related information and commands. For example, when I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once it was ready, the command bar showed some commands that I could apply to my new web site. "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a web site from scratch or from an existing library. I will introduce it in more detail in the next post. Also, in the command bar you can create a service by clicking the NEW button; it will slide the creation panel up for you.

Where's My Hosted Services

The Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; it just needs you to specify the service URL and the affinity group/region. Then the service will be shown in the list. If you click the item, all its information will be shown in the main part. Since there is no package deployed to this service yet, we currently cannot see much information about it, but we can upload a package by using the command at the bottom. And as you can see, we can manage the configuration, instances and certificates, and we can scale our service up and down (change the VM size) and in and out (decrease and increase the instance count). Assuming I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package, I can then click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click the OK button our package will be uploaded and provisioned by the platform, and after a while we can see from the information bar that the service is ready. We can get the basic information about this service and deployment if we go to the dashboard page - for example the usage overview diagram, status, URL, public IP address, etc. In the configure page we can view and change the CSCFG content, such as the monitoring setting, connection strings and OS family (see the sketch after this section for how a role reads these settings at runtime). In the scale page we can increase and decrease the count of the instances, and in the instances page we can view the status of all instances. If your service is using some SQL databases and storage accounts, they will be shown under the linked resources page, and you can manage the certificates of this service under the certificates page.
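As a small aside on why the configure page matters at runtime, here is a hedged sketch of how a web or worker role could read one of those CSCFG settings through the Microsoft.WindowsAzure.ServiceRuntime API of that SDK generation; the setting name and the local fallback are assumptions for illustration, not something shown in the portal walkthrough.

using System;
using Microsoft.WindowsAzure.ServiceRuntime; // ships with the Windows Azure SDK

public static class SettingsReader
{
    public static string GetSetting(string name)
    {
        // Inside a deployed role, values come from the CSCFG that the
        // portal's configure page edits, so a portal change reaches the
        // running instances without rebuilding the package.
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(name);
        }

        // Fallback for running the same code outside a role (assumption for this sketch).
        return Environment.GetEnvironmentVariable(name);
    }
}

// Hypothetical usage:
// string conn = SettingsReader.GetSetting("MyApp.ConnectionString");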
How About My Storage Services

The storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service from the NEW button. After specifying the storage name and region, it will be provisioned by the platform. If you want to copy or manage the storage keys you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration. Click the storage item in the list and navigate to the configure page. As you can see on the page, you can enable monitoring for blob, table and queue, and you can also enable logging for any requests that come to the storage account. But as the tooltip on the page says, enabling monitoring and logging will increase the usage of the storage, which means it increases your bill, so make sure you enable them appropriately.

And My SQL Databases (SQL Azure)

The last thing I want to quickly introduce is the SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Databases navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have one, or create a new one. If you choose to create a new server, there is another step you need to do, which is to specify the server login, password and region. Once it is ready you can manage your databases as well as the servers in the portal. For a particular server you can update the firewall settings on its Configure page. (A minimal connection sketch follows at the end of this post.)

So, What Else

There are some other areas of the preview portal I didn't cover, such as the virtual machines, the virtual network and web sites. I will talk about the virtual machines and web sites in future separate posts. Regarding the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop on your hosted services, and you cannot navigate directly to the Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to navigate back to the old portal. So in the coming several months we might need to use both of these two sites.

Summary

In this post I quickly introduced the new Windows Azure developer portal. Since it has been rearranged and renamed, I demonstrated some features that already existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them are in a different category or page that we need to find.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
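To round off the SQL Databases section above, here is a hedged sketch of connecting to a database on such a server from .NET with plain ADO.NET; the server name, database name and credentials are placeholders, and the calling machine's IP must first be allowed in the server's firewall settings as described in the post.

using System;
using System.Data.SqlClient; // standard ADO.NET client, also used against SQL Azure

public static class SqlDatabaseDemo
{
    public static void Main()
    {
        // Placeholder values; a real server name looks like xxxxxxxxxx.database.windows.net.
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=MyDatabase;" +
            "User ID=mylogin@myserver;" +
            "Password=<password>;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT GETUTCDATE()", connection))
        {
            connection.Open();
            var serverUtcTime = (DateTime)command.ExecuteScalar();
            Console.WriteLine("Connected; server UTC time is {0}", serverUtcTime);
        }
    }
}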

    Read the article

< Previous Page | 38 39 40 41 42 43 44  | Next Page >