Search Results

Search found 32429 results on 1298 pages for 'project layout'.


  • Important Tips to make a user friendly Homepage

    - by Aditi
    We have done a lot of redesign work lately for many online businesses, and there are some basic things we want to stress in any web design to make it usable. Unfortunately, most designers care only about the graphic elements, clean code and navigation. They do not realize that a design which is a masterpiece to them can be tough for a layman to use. It is very important to understand usability and the call to action for any business, and then to implement them on the website's homepage.

    Showcase your offering: the sole purpose of the website needs to be stated on the homepage: what you are offering, why one should do business with you, and why you are better than your competitors. Include a tag line under your logo that explicitly summarizes what the site or company does.

    Ease of navigation: make it easy for your users to find what they are looking for, whether archives, articles or services. A visible, easy-to-use navigation allows just that. You may also want to feature some of the most-visited content on the homepage itself.

    Search form: let your users take advantage of a search form on your homepage, and keep it visible enough that they can fall back on it when they have trouble locating content through the navigation. A search bar comes in handy in such situations, especially if you have thousands of pages.

    Liquid layout: there was a time when everyone used a standard resolution; not any more. People have different screen sizes, and many now browse on handhelds. Keep the width liquid so the page adjusts to each visitor's screen.

    You only get one chance at a first impression, and making it through a well-crafted homepage can do wonders for your business.
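
    A minimal sketch of what a liquid layout can look like in CSS (the class name and the 1200px cap are illustrative, not taken from the article):

        .page {
            width: 90%;          /* fluid: tracks the viewport width */
            max-width: 1200px;   /* but never grows wider than this */
            margin: 0 auto;      /* keep the content centered */
        }

        img {
            max-width: 100%;     /* images shrink with their container */
            height: auto;
        }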

    Read the article

  • problem with network-manager-pptp

    - by Riuzaki90
    I've a problema with the VPA CAble connection of my university... on the website of the university there's a .sh file that set all the variables of the connection in ETC/PPP/PEERS and another .sh file that call the connection...I'm on ubuntu 11.10 and when I run the setup.sh I have this error: impossible to find network-manager-pptp these are the two file that I had talk about: #!/bin/bash echo "Creazione della connessione in corso attendere........." apt-get update apt-get install pptp-linux network-manager-pptp echo -n "Digitare la propria Username: " read USERNAME echo -n "Digitare la propria Password: " read PASSWORD pptpsetup --create UNICAL_Campus_Access --server 160.97.73.253 --username $USERNAME --password $PASSWORD echo 'pty "pptp 160.97.73.253 --nolaunchpppd"' >/etc/ppp/peers/UNICAL_Campus_Access echo 'require-mppe-128' >>/etc/ppp/peers/UNICAL_Campus_Access echo 'file /etc/ppp/options.pptp'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'name '$USERNAME''>>/etc/ppp/peers/UNICAL_Campus_Access echo 'remotename PPTP'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'ipparam UNICAL_Campus_Access'>>/etc/ppp/peers/UNICAL_Campus_Access echo $USERNAME' PPTP '$PASSWORD' *'>>/etc/ppp/chap-secrets rm /etc/ppp/options.pptp echo '###############################################################################'>/etc/ppp/options.pptp echo '# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# Sample PPTP PPP options file /etc/ppp/options.pptp'>>/etc/ppp/options.pptp echo '# Options used by PPP when a connection is made by a PPTP client.'>>/etc/ppp/options.pptp echo '# This file can be referred to by an /etc/ppp/peers file for the tunnel.'>>/etc/ppp/options.pptp echo '# Changes are effective on the next connection. See "man pppd".'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# You are expected to change this file to suit your system. As'>>/etc/ppp/options.pptp echo '# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/'>>/etc/ppp/options.pptp echo '# and the kernel MPPE module available from the CVS repository also on'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.'>>/etc/ppp/options.pptp echo '###############################################################################'>>/etc/ppp/options.pptp echo '# Lock the port'>>/etc/ppp/options.pptp echo 'lock'>>/etc/ppp/options.pptp echo '# Authentication'>>/etc/ppp/options.pptp echo '# We do not need the tunnel server to authenticate itself'>>/etc/ppp/options.pptp echo 'noauth'>>/etc/ppp/options.pptp echo '#We won"t do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2'>>/etc/ppp/options.pptp echo '#(you may need to remove these refusals if the server is not using MPPE)'>>/etc/ppp/options.pptp echo 'refuse-pap'>>/etc/ppp/options.pptp echo 'refuse-eap'>>/etc/ppp/options.pptp echo 'refuse-chap'>>/etc/ppp/options.pptp echo 'refuse-mschap'>>/etc/ppp/options.pptp echo '# Compression Turn off compression protocols we know won"t be used'>>/etc/ppp/options.pptp echo 'nobsdcomp'>>/etc/ppp/options.pptp echo 'nodeflate'>>/etc/ppp/options.pptp echo '# Encryption'>>/etc/ppp/options.pptp echo '# (There have been multiple versions of PPP with encryption support,'>>/etc/ppp/options.pptp echo '# choose with of the following sections you will use. 
Note that MPPE'>>/etc/ppp/options.pptp echo '# requires the use of MSCHAP-V2 during authentication)'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras'>>/etc/ppp/options.pptp echo '# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#require-mppe-128'>>/etc/ppp/options.pptp echo '#}}}'>>/etc/ppp/options.pptp echo '# http://polbox.com/h/hs001/ fork from PPP project by Jan Dubiec'>>/etc/ppp/options.pptp echo '#ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#mppe required,stateless'>>/etc/ppp/options.pptp echo '# }}}'>>/etc/ppp/options.pptp echo "setup di 'UNICAL Campus Access' terminato correttamente" echo "per connettersi eseguire lo script 'UNICAL_Campus_Access.sh' " and the second: #!/bin/bash echo "Connessione alla Rete del Centro Residenziale in corso attendere........." modprobe ppp_mppe pppd call UNICAL_Campus_Access sleep 30 tail -n 8 /var/log/messages echo "Connessione Stabilita" echo -n "Per terminare la connessione premere invio (in alternativa eseguire il commando 'killall pppd'):----> " read CONN killall pppd echo "Connessione terminata" I've correctly installed network-manager-pptp to the latest version...help?
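
    As a quick sanity check of the package installation that setup.sh relies on, something like the following should work on Ubuntu 11.10 (a sketch; the package names are the ones the script itself installs, plus the optional GNOME front-end):

        # Does apt know about the NetworkManager PPTP plugin at all?
        apt-cache policy network-manager-pptp

        # Install the PPTP client and the NetworkManager plugin
        # (network-manager-pptp-gnome adds the GUI pieces and is optional)
        sudo apt-get update
        sudo apt-get install pptp-linux network-manager-pptp network-manager-pptp-gnome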

    Read the article

  • Metro: Dynamically Switching Templates with a WinJS ListView

    - by Stephen.Walther
    Imagine that you want to display a list of products using the WinJS ListView control. Imagine, furthermore, that you want to use different templates to display different products. In particular, when a product is on sale, you want to display the product using a special “On Sale” template. In this blog entry, I explain how you can switch templates dynamically when displaying items with a ListView control. In other words, you learn how to use more than one template when displaying items with a ListView control. Creating the Data Source Let’s start by creating the data source for the ListView. Nothing special here – our data source is a list of products. Two of the products, Oranges and Apples, are on sale. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99, onSale: true }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44, onSale: true }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The file above is saved with the name products.js and referenced by the default.html page described below. Declaring the Templates and ListView Control Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file: <!-- Templates --> <div id="productItemTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template"> <div class="product onSale"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> </div> <!-- ListView --> <div id="productsListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: ListViewDemos.products.dataSource, layout: { type: WinJS.UI.ListLayout } }"> </div> In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first template, includes the text “(On Sale!)”. The ListView control is bound to the data source which we created in the previous section. The ListView itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property. We set this property in the default.js file. Switching Between Templates All of the magic happens in the default.js file. The default.js file contains the JavaScript code used to switch templates dynamically. 
Here’s the entire contents of the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll().then(function () { var productsListView = document.getElementById("productsListView"); productsListView.winControl.itemTemplate = itemTemplateFunction; });; } }; function itemTemplateFunction(itemPromise) { return itemPromise.then(function (item) { // Select either normal product template or on sale template var itemTemplate = document.getElementById("productItemTemplate"); if (item.data.onSale) { itemTemplate = document.getElementById("productOnSaleTemplate"); }; // Render selected template to DIV container var container = document.createElement("div"); itemTemplate.winControl.render(item.data, container); return container; }); } app.start(); })(); In the code above, a function is assigned to the ListView itemTemplate property with the following line of code: productsListView.winControl.itemTemplate = itemTemplateFunction;   The itemTemplateFunction returns a DOM element which is used for the template item. Depending on the value of the product onSale property, the DOM element is generated from either the productItemTemplate or the productOnSaleTemplate template. Using Binding Converters instead of Multiple Templates In the previous sections, I explained how you can use different templates to render normal products and on sale products. There is an alternative approach to displaying different markup for normal products and on sale products. Instead of creating two templates, you can create a single template which contains separate DIV elements for a normal product and an on sale product. The following default.html file contains a single item template and a ListView control bound to the template. <!-- Template --> <div id="productItemTemplate" data-win-control="WinJS.Binding.Template"> <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> </div> <!-- ListView --> <div id="productsListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: ListViewDemos.products.dataSource, itemTemplate: select('#productItemTemplate'), layout: { type: WinJS.UI.ListLayout } }"> </div> The first DIV element is used to render a normal product: <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> The second DIV element is used to render an “on sale” product: <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> Notice that both templates include a data-win-bind attribute. These data-win-bind attributes are used to show the “normal” template when a product is not on sale and show the “on sale” template when a product is on sale. These attributes set the Cascading Style Sheet display attribute to either “none” or “block”. The data-win-bind attributes take advantage of binding converters. 
The binding converters are defined in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll(); } }; WinJS.Namespace.define("ListViewDemos", { displayNormalProduct: WinJS.Binding.converter(function (onSale) { return onSale ? "none" : "block"; }), displayOnSaleProduct: WinJS.Binding.converter(function (onSale) { return onSale ? "block" : "none"; }) }); app.start(); })(); The ListViewDemos.displayNormalProduct binding converter converts the value true or false to the value “none” or “block”. The ListViewDemos.displayOnSaleProduct binding converter does the opposite; it converts the value true or false to the value “block” or “none” (Sadly, you cannot simply place a NOT operator before the onSale property in the binding expression – you need to create both converters). The end result is that you can display different markup depending on the value of the product onSale property. Either the contents of the first or second DIV element are displayed: Summary In this blog entry, I’ve explored two approaches to displaying different markup in a ListView depending on the value of a data item property. The bulk of this blog entry was devoted to explaining how you can assign a function to the ListView itemTemplate property which returns different templates. We created both a productItemTemplate and productOnSaleTemplate and displayed both templates with the same ListView control. We also discussed how you can create a single template and display different markup by using binding converters. The binding converters are used to set a DIV element’s display property to either “none” or “block”. We created a binding converter which displays normal products and a binding converter which displays “on sale” products.

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian hosting three virtual hosts. The first two hosts are HTTP only (server1 and server2). The last host is HTTPS only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod rewrite to get the following behavior: Any request for http://server3 is re-directed to https://server3 Any request for either https://server1 or https://server2 is re-directed to http://server1 or http://server2 as appropriate. Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host and a request for https://server1 or https://server2 will resolve as https://server3 (as its the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), with unfortunately no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request. RewriteEngine On RewriteCond %{HTTP_HOST} !^server3 [NC] RewriteRule (.*) http://%{HTTP_HOST} My question is two-fold: What mod rewrite rules should I use to accomplish this? And where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod rewrite the right tool to accomplish this or am I missing something in my greater apache configuration?
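
    One way to express the desired behaviour is a redirect inside each virtual host rather than a single server-wide rule; a sketch, assuming the host names from the question and that server3's vhost is the only one listening on port 443:

        # Plain-HTTP vhost for server3: bounce every request to HTTPS
        <VirtualHost *:80>
            ServerName server3
            RewriteEngine On
            RewriteRule ^(.*)$ https://server3$1 [R=301,L]
        </VirtualHost>

        # Inside server3's SSL vhost: anything arriving with a different
        # Host header is sent back to plain HTTP on that host
        <VirtualHost *:443>
            ServerName server3
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^server3 [NC]
            RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]
            # ... SSL and DocumentRoot directives as before ...
        </VirtualHost>

    With this layout the rules live in the individual vhost files, so nothing extra would be needed in apache2.conf or myServerWideConfs.conf for this particular behaviour.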

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low machine language, then when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless program is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location c(like storage) common to all machines involved in the process.  An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so it’s performance is unpredictable. 
    Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it's imperative to deliver only results and graphical elements where possible.  Token-Based Authentication - Also called "Claims-Based Authorization", this coding practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A "token" or "claim", often represented as a certificate, is sent along with a series of requests, or even a single one. In other words, every call to the code is authenticated against the token, rather than giving a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.
    Resources:
    See the references on "Nondistributed Deployment" and "Distributed Deployment" at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx
    Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming
    Another good discussion on Stack Overflow, on server-side processing, is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing
    Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
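
    To make the Stateless Programming point above concrete, here is a minimal sketch in Java (the StateStore interface and all names are hypothetical; the point is only that nothing about the operation lives in a single server's memory between calls):

        // Hypothetical external store shared by every server in the farm
        interface StateStore {
            String load(String key);
            void save(String key, String value);
        }

        // A stateless handler: everything it needs arrives with the request
        // or is read from (and written back to) the shared store.
        class CartHandler {
            private final StateStore store;

            CartHandler(StateStore store) { this.store = store; }

            String addItem(String sessionId, String itemId) {
                String cart = store.load(sessionId);   // state lives outside this server
                String updated = cart + "," + itemId;
                store.save(sessionId, updated);        // persist it before returning
                return updated;                        // any server can handle the next call
            }
        }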

    Read the article

  • Part 9: EBS Customizations, how to track

    - by volker.eckardt(at)oracle.com
    In the previous blogs we were concentrating on the preparation tasks. We have defined standards, we know about the tools and techniques we will start with. Additionally, we have defined the modification strategy, and how to handle such topics best. Now we are ready to take the requirements! Such requirements coming over in spreadsheets, word files (like GAP documents), or in any other format. As we have to assign some attributes, we start numbering all that and assign a short name to each of these requirements (=CEMLI reference). We may also have already a Functional person assigned, and we might involve someone from the tech team to estimate, and we like to assign a status such as 'planned', 'estimated' etc. All these data are usually kept in spreadsheets, but I would put them into a database (yes, I am from Oracle :). If you don't have any good looking and centralized application already, please give a try with Oracle APEX. It should be up and running in a day and the imported sheets are than manageable concurrently!  For one of my clients I have created this CEMLI-DB; in between enriched with a lot of additional functionality, but initially it was just a simple centralized CEMLI tracking application. Why I am pointing out again the centralized method to manage such data? Well, your data quality will dramatically increase, if you let your project members see (also review and update) "your" data.  APEX allows you to filter, sort, print, and also export. And if you can spend some time to define proper value lists, everyone will gain from. APEX allows you to work in 'agile' mode, means you can improve your application step by step. Let's say you like to reference a document, or even upload the same, you can do that. Or, you need to classify the CEMLIs by release, just add this release field, same for business area or CEMLI type. One CEMLI record may then look like this: Prepare one or two (online) reports, to be ready to present your "workload" to the project management. Use such extracts also when you work offline (to prioritize etc.). But as soon as you are again connected, feed the data back into the central application. Note: I have combined this application with an additional issue tracker.  Here the most important element is the CEMLI reference, which acts as link to any other application (if you are not using APEX also as issue tracker :).  Please spend a minute to define such a reference (see blog Part 8: How to name Customizations).   Summary: Building the bridge from Gap analyse to the development has to be done in a controlled way. Usually the information is provided differently, but it is suggested to collect all requirements centrally. Oracle APEX is a great solution to enter and maintain such information in a structured, but flexible way. APEX helped me a lot to work with distributed development teams during the complete development cycle.
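
    If you want to start such a CEMLI database yourself, the underlying table can be very small; a sketch (the column names are illustrative only, and APEX can generate the maintenance forms on top of it):

        CREATE TABLE cemli_requirements (
            cemli_ref        VARCHAR2(20) PRIMARY KEY,  -- the short CEMLI reference
            description      VARCHAR2(400),
            cemli_type       VARCHAR2(30),              -- e.g. report, interface, extension
            release_tag      VARCHAR2(20),
            business_area    VARCHAR2(50),
            functional_owner VARCHAR2(60),
            status           VARCHAR2(20) DEFAULT 'planned',
            estimate_days    NUMBER
        );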

    Read the article

  • Responsive website VS mobile website

    - by Saif Bechan
    I am creating a new blog. Nowadays, especially for a blog, it is important that the website is accessible from all devices, so I have to make a choice between two options I have seen. Option 1 is to go with a normal fixed-width website, for example 960px wide (grid960), and to offer mobile users a separate mobile version. This takes more time, but you end up with two good versions of the website. Option 2, which I have not seen used much yet, is creating an adaptive website, also called a responsive website. I am now looking into the LESS framework, where the website automatically switches to the required width. The only downside is that when a normal browser is re-sized, everything re-sizes with it. Another problem I found is that pinch-to-zoom on devices does not work. So the question is: which would you prefer for a blog, one that changes layout whenever you move your device, or one where you have the choice between a mobile and a normal view? If there are any other options, please let me know.
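
    For reference, the responsive approach usually comes down to a handful of CSS media queries; a minimal sketch (breakpoints and class names are illustrative):

        /* Default: a fluid single column for small screens */
        .content { width: 95%; margin: 0 auto; }
        .sidebar { display: none; }

        /* Wider screens: cap the width and bring the sidebar back */
        @media (min-width: 960px) {
            .content { width: 960px; }
            .sidebar { display: block; float: right; width: 300px; }
        }

    Pinch-to-zoom on devices is typically governed separately by the viewport meta tag (its user-scalable setting), not by the stylesheet itself.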

    Read the article

  • Creating PDF Documents with ASP.NET and iTextSharp

    The Portable Document Format (PDF) is a popular file format for documents. Due to its ubiquity and layout capabilities, it's not uncommon for websites to use PDF technology. For example, an eCommerce store may offer a "printable receipt" option that, when selected, displays a PDF file within the browser. Last week's article, Filling in PDF Forms with ASP.NET and iTextSharp, looked at how to work with a special kind of PDF document, namely one that has one or more fields defined. A PDF document can contain various types of user interface elements, which are referred to as fields. For instance, there is a text field, a checkbox field, a combobox field, and more. Typically, the person viewing the PDF on her computer interacts with the document's fields; however, it is possible to enumerate and fill a PDF's fields programmatically, as we saw in last week's article. This article continues our investigation into iTextSharp, a .NET open source library for PDF generation, showing how to use iTextSharp to create PDF documents from scratch. We start with an example of how to programmatically define and piece together paragraphs, tables, and images into a single PDF file. Following that, we explore how to use iTextSharp's built-in capabilities to convert HTML into PDF. Read on to learn more!
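
    iTextSharp is the .NET port of the Java iText library, and the "from scratch" workflow the article walks through looks roughly like this in the original Java API (a sketch against the classic iText 2.x classes, not code from the article itself):

        import com.lowagie.text.Document;
        import com.lowagie.text.Paragraph;
        import com.lowagie.text.pdf.PdfPTable;
        import com.lowagie.text.pdf.PdfWriter;
        import java.io.FileOutputStream;

        public class ReceiptPdf {
            public static void main(String[] args) throws Exception {
                Document doc = new Document();                        // default page size
                PdfWriter.getInstance(doc, new FileOutputStream("receipt.pdf"));
                doc.open();

                doc.add(new Paragraph("Printable receipt"));          // a paragraph

                PdfPTable table = new PdfPTable(2);                   // a two-column table
                table.addCell("Item");
                table.addCell("Price");
                table.addCell("Widget");
                table.addCell("9.99");
                doc.add(table);

                doc.close();                                          // flushes the PDF to disk
            }
        }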

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12-hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse ( Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!) Multidimensional Thinking: The Presentation My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did! Kimball All the Way There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right! Schema Generation One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method that you’re comfortable with. But just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. 
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.
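
    As a concrete illustration of the dimensional terminology, a minimal star schema might look like this (the table and column names are hypothetical, not from the presentation):

        -- A dimension: one row per product, holding descriptive attributes
        CREATE TABLE DimProduct (
            ProductKey   INT IDENTITY PRIMARY KEY,
            ProductName  NVARCHAR(50),
            Category     NVARCHAR(30)
        );

        -- A dimension: one row per calendar day
        CREATE TABLE DimDate (
            DateKey      INT PRIMARY KEY,   -- e.g. 20110321
            CalendarDate DATE,
            CalendarYear INT
        );

        -- The fact table: foreign keys to the dimensions plus numeric measures
        CREATE TABLE FactSales (
            ProductKey   INT REFERENCES DimProduct (ProductKey),
            DateKey      INT REFERENCES DimDate (DateKey),
            Quantity     INT,
            SalesAmount  MONEY
        );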

    Read the article

  • Web developing- Strange happenings

    - by Jason
    As I'm teaching myself PHP and MySQL during break, I'm experimenting with coding in an Ubuntu virtual machine where Apache, MySQL and PHP have been installed and configured to use a shared folder. I'm not a big fan of Kompozer because the source code layout is a PIA, so I've started checking out gPHPEdit. However, since using it, I've come across two issues:
    1. When I edit the .html and .php files, sometimes the file extension changes to .html~ or .php~, making the file invisible to the browser. The only solution I've found is to switch to Windows, right-click and rename the file extension.
    2. In Ubuntu Firefox, when I click my project's Submit button in a practice form, a dialog box pops up asking what Firefox should do with the .php file, rather than simply displaying it in the browser. When I do this in Windows with Chrome or Firefox, it goes straight to the response page.
    I'm not sure if this behavior is limited to gPHPEdit/Kompozer, but I've never noticed it happening in Dreamweaver. Any solutions?
    EDIT: The behavior in point 1 occurs both when Dreamweaver is open in Windows accessing the same files and when it is not. I fixed the file extension of welcome.php, added a comment in gPHPEdit, and the file changed to welcome.php~ upon saving.
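
    For reference, files ending in a tilde can be listed or cleaned up from a terminal; a sketch (the path is a placeholder for wherever the shared project folder is mounted):

        # List files ending in ~ under the project
        find /path/to/project -name '*~'

        # Remove them once you are sure nothing important is in them
        find /path/to/project -name '*~' -delete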

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set a property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.   A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package “When you execute, go and get a configuration value from this location” and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log: Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig". Unfortunately things DON’T always go well, perhaps the .dtsConfig file is unreachable or the name of the SQL Sever holding the configuration value has been defined incorrectly – any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead: Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name. The problem that I want to draw attention to here though is that your package will ignore the fact it can’t find the configuration and executes anyway. This is really really bad because the package will not be doing what it is supposed to do and worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous but I have absolutely seen this happen and the root cause was that no-one picked up on configuration warnings like the one above. Happily in SSIS code-named Denali this problem has gone away as configurations have been replaced with parameters. Each parameter has a property called ‘Required’: Any parameter with Required=True must have a value passed to it when the package executes. Any attempt to execute the package will result in an error. Here we see that error when attempting to execute using the SSMS UI: and similarly when executing using T-SQL: Error is: Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112 In order to execute this package, you need to specify values for the required parameters.   As you can see, SSIS code-named Denali has mechanisms built-in to prevent the problem I described at the top of this blog post. Specifying a Parameter required means that any packages in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loathe to make recommendations so early in the development cycle but right now I’m thinking that all Project Parameters should have Required=True, certainly any that are used to define external locations should be anyway. @Jamiet
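
    For anyone curious what supplying a required parameter looks like from T-SQL, here is a sketch against the SSIS catalog (the folder, project, package and parameter names are illustrative; the catalog.* procedures are the ones that ship with the SSISDB catalog in Denali):

        DECLARE @execution_id BIGINT;

        EXEC SSISDB.catalog.create_execution
                @folder_name  = N'MyFolder',
                @project_name = N'MyProject',
                @package_name = N'Package.dtsx',
                @execution_id = @execution_id OUTPUT;

        -- Supply a value for the required project parameter
        EXEC SSISDB.catalog.set_execution_parameter_value
                @execution_id    = @execution_id,
                @object_type     = 20,          -- 20 = project parameter, 30 = package parameter
                @parameter_name  = N'ServerName',
                @parameter_value = N'MyProductionServer';

        EXEC SSISDB.catalog.start_execution @execution_id;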

    Read the article

  • Google Rolls Out a New and Compact Navigation Bar

    - by Jason Fitzpatrick
    Earlier this spring Google introduced the black navigation bar; now they've updated the bar to take up less space and be more useful. Although the black bar is useful insofar as it gives you quick access to Google services (useful, of course, only if you use those services), the new navigation bar, shown in a short video in the original post, includes an improved layout. Rather than use the bar space to spread out links the user may or may not want, the service links are now tucked into a mouse-over menu accessed by hovering over the Google logo. The majority of the space previously taken up by links and the black bar itself is now a search box. If you don't already see the new interface, look for it to appear in your Google account within the next few days. Hit up the link below to read the official announcement. The Next Stage In Our Redesign [The Official Google Blog]

    Read the article

  • Does waterfall require code complete before QA steps in?

    - by P.Brian.Mackey
    The process used at a certain company consists of:
    1. Create a layout according to designs made in a web page design tool (CSS, HTML).
    2. Requirements come in as "functional requirements". These consist of hundreds of lines of business directions, e.g. "Create a table on page X. Column1 has numeric data. Column1 is the client code. Column2 is a string..." and so on.
    3. Write code to meet all functional requirements.
    4. When all code is checked in, send it to QA (which is the BA who wrote the requirements) for inspection, bug finds and change requests.
    5. Punt it back to the developer with a list of X bugs and Y change requests.
    6. While bug finds or change requests > 0, go to step 4.
    The agile development environments I have worked in allow, if not demand, early QA inspection and early user acceptance, so pieces of the program can be refined and redefined before the entire application is in place. The process above, by contrast, leaves little room for error or for people changing their minds. Instead, those "change requests" come in at the last stage, when they do the most damage. And since a bug-fix's cost increases over time, this is a costly way to write code. I am no waterfall expert. As described, is this waterfall being mishandled in some way? How does waterfall address my concerns?

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts and that is exactly what we are doing. Our requirement is a little different though, our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not so we could use something like this in our Post-Deployment script: IF ($(DeployBucket1Flag) = 'Yes')BEGIN   :r .\Bucket1.data.sqlENDIF ($(DeployBucket2Flag) = 'Yes')BEGIN   :r .\Bucket2.data.sqlENDIF ($(DeployBucket3Flag) = 'Yes')BEGIN   :r .\Bucket3.data.sqlEND That works fine and is, I’m sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts): /* $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared in the SQLCMD variables section of your project file. It should contain a numerical value, defaulted to 0. In this example I have declared it using a :setvar statement. Test the affect of different values by changing the :setvar statement accordingly. Examples: :setvar DeployData 1 will deploy bucket 1 :setvar DeployData 2 will deploy bucket 2 :setvar DeployData 3   will deploy buckets 1 & 2 :setvar DeployData 6   will deploy buckets 2 & 3 :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5 */ :setvar DeployData 0 DECLARE  @bitmask VARBINARY(MAX) = CONVERT(VARBINARY,$(DeployData)); IF (@bitmask & 1 = 1) BEGIN     PRINT 'Bucket 1 insertions'; END IF (@bitmask & 2 = 2) BEGIN     PRINT 'Bucket 2 insertions'; END IF (@bitmask & 4 = 4) BEGIN     PRINT 'Bucket 3 insertions'; END IF (@bitmask & 8 = 8) BEGIN     PRINT 'Bucket 4 insertions'; END IF (@bitmask & 16 = 16) BEGIN     PRINT 'Bucket 5 insertions'; END An example of running this using DeployData=6 The binary representation of 6 is 110. The second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.

    Read the article

  • Keeping multiple root directories in a single partition

    - by intuited
    I'm working out a partition scheme for a new install. I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot. However, I'd also like to keep the number of total partitions small. Even with LVM, there's inevitably some wasted space and it's still annoying and vaguely dangerous to allocate more. So there seem to be a couple of different options: Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition. These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them. The opposite: putting the big variable directories on the second partition, and having the essential system directories live on the root FS. Either of these options require that certain directories be pointers of some variety to subdirectories of a second partition. I'm aware of two different ways to do this: symlinks and bind-mounts. Is one better than the other for this purpose? Is there another option? Do any of the various Ubuntu installation media/strategies support this style of partition layout?
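
    Both pointer mechanisms are quick to sketch; assuming, purely for illustration, that the bulky directories live on a second filesystem mounted at /srv/big, the two options look like this:

        # Symlink: /home is just a pointer into the other filesystem
        sudo ln -s /srv/big/home /home

        # Bind mount, done once by hand...
        sudo mount --bind /srv/big/home /home

        # ...or made permanent with an /etc/fstab entry
        /srv/big/home  /home  none  bind  0  0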

    Read the article

  • Questions about Mac Book Pro Keyboard and shortcuts

    - by SimonSolnes
    I have been researching a lot about this and cannot seem to find any useful information on the topic whatsoever. I have now been using Ubuntu for one week and have gotten pretty confident with almost everything except the keyboard layout and shortcuts. If you know of a tutorial or document explaining keyboard shortcuts in Ubuntu or Linux in general, could you please list it? I use a MacBook Pro with a Norwegian keyboard, and I have several questions about this:
    - Is there a program that gives a concise list of absolutely all keyboard shortcuts and lets me change them?
    - How do I use my Fn keys? (The Fn button doesn't do the job for some reason.)
    - How can I use Alt+letter/number or Alt+Shift+letter/number to get special characters, like I do on Mac OS X?
    - How can I swap the Cmd and Ctrl keys system-wide?
    I really want to get oriented on this subject, since it is the only thing holding me back on Ubuntu, so if there is some in-depth material on it, that would be great. Also, if there are programs or material out there making things easier with Mac hardware, I would enjoy that. Sorry if my questions seem vague. Thank you very much, Simon

    Read the article

  • Silverlight Cream for May 27, 2010 -- #871

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Max Paulousky, Jeff Wilcox, David Anson, René Schulte, Xianzhong Zhu, Jeff Handley, John Papa, Jeremy Likness, and Marlon Grech. Shoutouts: SilverLaw has a great demo at the Expression Gallery, and we're all going to look forward to the blog post explaining it: Flexible Surface Effect SilverLaw> has another use for the above in this text morphing Effect: Morphing Text Effect Matthias Shapiro contributed a chapter for a book on Visualization and it's available as a free download: Free Chapter From Beautiful Visualization Andy Beaulieu has a demo up as almost a spoiler for a future Coding4Fun app... and how cool is this: Shuffleboard: A Windows Phone 7 Sample Game From SilverlightCream.com: Separating Content and Presentation with the ContentControl Phil Middlemiss' latest is out on SilverlightShow and is all about the ContentControl and separating layout and content ... demo project source included Search Engine Optimization (SEO) for Silverlight Applications. Part 1 Max Paulousky has part one of a long series he's starting on a demo project to explain a bunch of MEF, MVVM, and WCF RIA concepts. This first one contains the overview and also discusses SEO. There is a link to the app and material in the post if you read Russian :) Updated Silverlight Unit Test Framework bits for Windows Phone and Silverlight 3 Jeff Wilcox has available updated Unit Test bits for Silverlight 3 -- read that as WP7... read the rest of the information on his post. Easily animate orientation changes for any Windows Phone application with this handy source code David Anson has some code up that you're going to want if you're programming WP7 ... just watch the video ... you'll be downloading the code just like I did :) SilverShader – Introduction to Silverlight and WPF Pixel Shaders René Schulte has a post up at Coding4Fun about PixelShaders... how to write them and an application that uses them... this is a great long tutorial... a must read. Developing Freecell Game Using Silverlight 3 Part 2 Xianzhong Zhu has part 2 of his FreeCell game development posted ... lots of detailed descriptions and code, plus all the code of course! Async Validation with RIA Services Jeff Handley has a post up that is sort of a follow-on to a year-old post on async validation with RIA services and DataForm and how it's all much easier now in SL4. Learning Blend with .toolbox (Silverlight TV #29) John Papa and Arturo Toledo discuss .toolbox in Silverlight TV #29 -- have you made yourself an avatar yet? ... well go get on-board with this great learning tool! Silverlight Out of Browser Dynamic Modules in Offline Mode OOB isn't difficult, dynamic modules can become a bit more, but what if you're OOB... ok what if you're OOB and offline? ... Jeremy Likness has a possible solution for this with an OfflineCatalog. MEFedMVVM v1.0 Explained Marlon Grech has a great into to MEFedMVVM in this post. If you're trying to get your head around MEF and MVVM in either WPF or Silverlight, here's a good starting point. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using VBO and orthographic projection.   My results:     What I expect: 3x3 rectangle in the top left corner   #include <stdio.h> #include <GL\glew.h> #include <GLFW\glfw3.h> #include "lodepng.h"   static const int FALSE = 0; static const int TRUE = 1;   static const char* VERT_SHADER =     "#version 330\n"       "layout(location=0) in vec4 VertexPosition; "     "layout(location=1) in vec2 UV;"     "uniform mat4 uProjectionMatrix;"     /*"out vec2 TexCoords;"*/       "void main(void) {"     "    gl_Position = uProjectionMatrix*VertexPosition;"     /*"    TexCoords = UV;"*/     "}";   static const char* FRAG_SHADER =     "#version 330\n"       /*"uniform sampler2D uDiffuseTexture;"     "uniform vec4 uColor;"     "in vec2 TexCoords;"*/     "out vec4 FragColor;"       "void main(void) {"    /* "    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"     "    if(texel.a <= 0) {"     "         discard;"     "    }"     "    FragColor = texel;"*/     "    FragColor = vec4(1.f);"     "}";   static int g_running; static GLFWwindow *gl_window; static float gl_projectionMatrix[16];   /*     Structures */ typedef struct _Vertex {     float x, y, z, w;     float u, v; } Vertex;   typedef struct _Position {     float x, y; } Position;   typedef struct _Bitmap {     unsigned char *pixels;     unsigned int width, height; } Bitmap;   typedef struct _Texture {     GLuint id;     unsigned int width, height; } Texture;   typedef struct _VertexBuffer {     GLuint bufferObj, vertexArray; } VertexBuffer;   typedef struct _ShaderProgram {     GLuint vertexShader, fragmentShader, program; } ShaderProgram;   /*   http://en.wikipedia.org/wiki/Orthographic_projection */ void createOrthoProjection(float *projection, float width, float height, float far, float near)  {       const float left = 0;     const float right = width;     const float top = 0;     const float bottom = height;          projection[0] = 2.f / (right - left);     projection[1] = 0.f;     projection[2] = 0.f;     projection[3] = -(right+left) / (right-left);     projection[4] = 0.f;     projection[5] = 2.f / (top - bottom);     projection[6] = 0.f;     projection[7] = -(top + bottom) / (top - bottom);     projection[8] = 0.f;     projection[9] = 0.f;     projection[10] = -2.f / (far-near);     projection[11] = (far+near)/(far-near);     projection[12] = 0.f;     projection[13] = 0.f;     projection[14] = 0.f;     projection[15] = 1.f; }   /*     Textures */ void loadBitmap(const char *filename, Bitmap *bitmap, int *success) {     int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);       if (error != 0) {         printf("Failed to load bitmap. 
");         printf(lodepng_error_text(error));         success = FALSE;         return;     } }   void destroyBitmap(Bitmap *bitmap) {     free(bitmap->pixels); }   void createTexture(Texture *texture, const Bitmap *bitmap) {     texture->id = 0;     glGenTextures(1, &texture->id);     glBindTexture(GL_TEXTURE_2D, texture);       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,              GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);       glBindTexture(GL_TEXTURE_2D, 0); }   void destroyTexture(Texture *texture) {     glDeleteTextures(1, &texture->id);     texture->id = 0; }   /*     Vertex Buffer */ void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices) {     glGenBuffers(1, &vertexBuffer->bufferObj);     glGenVertexArrays(1, &vertexBuffer->vertexArray);     glBindVertexArray(vertexBuffer->vertexArray);       glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);     glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);       const unsigned int uvOffset = sizeof(float) * 4;       glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);     glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);       glEnableVertexAttribArray(0);     glEnableVertexAttribArray(1);       glBindBuffer(GL_ARRAY_BUFFER, 0);     glBindVertexArray(0); }   void destroyVertexBuffer(VertexBuffer *vertexBuffer) {     glDeleteBuffers(1, &vertexBuffer->bufferObj);     glDeleteVertexArrays(1, &vertexBuffer->vertexArray); }   void bindVertexBuffer(VertexBuffer *vertexBuffer) {     glBindVertexArray(vertexBuffer->vertexArray);     glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj); }   void drawVertexBufferMode(GLenum mode) {     glDrawArrays(mode, 0, 6); }   void drawVertexBuffer() {     drawVertexBufferMode(GL_TRIANGLES); }   void unbindVertexBuffer() {     glBindVertexArray(0);     glBindBuffer(GL_ARRAY_BUFFER, 0); }   /*     Shaders */ void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc) {     GLenum err;     shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);     shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);       if (shaderProgram->vertexShader == 0) {         printf("Failed to create vertex shader.");         return;     }       if (shaderProgram->fragmentShader == 0) {         printf("Failed to create fragment shader.");         return;     }       glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);     glCompileShader(shaderProgram->vertexShader);     glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile vertex shader.");         return;     }       glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);     glCompileShader(shaderProgram->fragmentShader);     glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile fragment shader.");         return;     }       shaderProgram->program = glCreateProgram();     glAttachShader(shaderProgram->program, shaderProgram->vertexShader);     glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);     
glLinkProgram(shaderProgram->program);          glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to link shader.");         return;     } }   void destroyShader(ShaderProgram *shaderProgram) {     glDetachShader(shaderProgram->program, shaderProgram->vertexShader);     glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);       glDeleteShader(shaderProgram->vertexShader);     glDeleteShader(shaderProgram->fragmentShader);       glDeleteProgram(shaderProgram->program); }   GLuint getUniformLocation(const char *name, ShaderProgram *program) {     GLuint result = 0;     result = glGetUniformLocation(program->program, name);       return result; }   void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program) {     GLuint loc = getUniformLocation(name, program);       if (loc == -1) {         printf("Failed to get uniform location in setUniformMatrix.\n");         return;     }       glUniformMatrix4fv(loc, 1, GL_FALSE, matrix); }   /*     General functions */ static int isRunning() {     return g_running && !glfwWindowShouldClose(gl_window); }   static void initializeGLFW(GLFWwindow **window, int width, int height, int *success) {     if (!glfwInit()) {         printf("Failed it inialize GLFW.");         *success = FALSE;        return;     }          glfwWindowHint(GLFW_RESIZABLE, 0);     *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);          if (!*window) {         printf("Failed to create window.");         glfwTerminate();         *success = FALSE;         return;     }          glfwMakeContextCurrent(*window);       GLenum glewErr = glewInit();     if (glewErr != GLEW_OK) {         printf("Failed to initialize GLEW.");         printf(glewGetErrorString(glewErr));         *success = FALSE;         return;     }       glClearColor(0.f, 0.f, 0.f, 1.f);     glViewport(0, 0, width, height);     *success = TRUE; }   int main(int argc, char **argv) {          int err = FALSE;     initializeGLFW(&gl_window, 480, 320, &err);     glDisable(GL_DEPTH_TEST);     if (err == FALSE) {         return 1;     }          createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);          g_running = TRUE;          ShaderProgram shader;     compileShader(&shader, VERT_SHADER, FRAG_SHADER);     glUseProgram(shader.program);     setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);       Vertex rectangle[6];     VertexBuffer vbo;     rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left     rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right     rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left     rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right       createVertexBuffer(&vbo, &rectangle);            bindVertexBuffer(&vbo);          while (isRunning()) {         glClear(GL_COLOR_BUFFER_BIT);         glfwPollEvents();                    drawVertexBuffer();                    glfwSwapBuffers(gl_window);     }          unbindVertexBuffer(&vbo);       glUseProgram(0);     destroyShader(&shader);     destroyVertexBuffer(&vbo);     glfwTerminate();     return 0; }

    Read the article

  • What are the boundaries between the responsibilities of a web designer and a web developer?

    - by Beofett
    I have been hired to do functional development for several web site redesigns. The company I work for has a relatively low technical level, and the previous development of the web sites was completed by a graphic designer who is self-taught as far as web development is concerned. My responsibilities have extended beyond basic development, as I have also been tasked with creating the development environment and migrating from external CMS hosting to internal servers incorporating scripting languages (I opted for PHP/MySQL). I am working with the graphic designer, and he is responsible for the creative design of the web sites. We are running into a bit of friction over confusion about the boundaries of our respective tasks. For example, we had some differences of opinion on navigation. I was primarily concerned with ease of use (the majority of our userbase are not particularly web-savvy), as well as meeting W3C WAI standards (many of our users are older, and we have a higher than average proportion of users with visual impairment). His sole concern was what looked best for the website, and I felt that the direction he was pushing for caused some functional problems. I feel color choices, images, fonts, etc. are clearly his responsibility, and my expectation was that he would simply provide me with the CSS pages and the style classes and IDs to use, but some elements of page layout also seem to fall more under the realm of "usability", which to me is near-synonymous with "functionality". I've been tasked with selecting the tools we'll use, which include frameworks, scripting languages, database design, and some open source applications (Moodle for example, and quite probably Drupal in the future). While these tools are quite customizable, working directly with some of their interfaces is beyond his familiarity with CSS, HTML, and PHP. This limits how much direct control he has over the appearance, which has led to some discussion about the tool choices. Is there a generally accepted dividing line between the roles of a web designer and a web developer? Does his relatively inexperienced background in web technologies influence that dividing line?

    Read the article

  • How Do I Print Photos?

    - by Takkat
    Unlike on Windows, in Ubuntu there are no fancy utilities provided by printer manufacturers for printing photos. I am aware of Gnome Photo Printer and of Photoprint, the former being easy to handle, the latter having more options. However, I wonder if there are any other, maybe even better, alternatives (including plugins) to perform the following tasks: print photos at the best photo resolution the driver offers; adjust the paper size to standard photo paper sizes; choose the paper tray if the printer has more than one; print multiple photos on one page, including mixed sizes (grids); make multiple prints with the same settings; and print borderless if the printer is capable of it. Any additional options like pre-processing for color correction or noise reduction would be nice to have but are not essential. Update: according to this spec it seems it is not so easy to accomplish the simple task of printing photos. Indeed, all the applications I have gone through have major drawbacks that make printing photos almost impossible. Below I list what put me off using them for photo printing. Gnome Photo Printer: no thumbnails, no grids. Photoprint: does not keep settings, GUI broken, no standard photo sizes, no thumbnails. Eye of GNOME: no multiple pages, no grids. Gimp + Images Grid Layout: far too many steps, only to find that prints always differ from their previews. F-Spot: no grids. Picasa 3: no grids, very few fixed paper sizes, 300 dpi only. flPhoto: strange GUI, no thumbnails, no printer settings, did not print at all. Windows: oops - everything works fine! But I want Ubuntu to do this! After half a pack of ink cartridges and half a pack of photo paper cards I am getting tired of testing. At least Gimp and Picasa looked promising, but neither keeps its promise when it comes to printing. I'd already be happy to quickly print a few photos with EOG if bug #80220 were fixed - but it's still on the "wishlist".
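
    For reference, the CUPS command-line tools already cover a few of these points. A minimal sketch, assuming a CUPS queue named "photo" (the queue name is a placeholder, and tray or borderless paper names depend entirely on the driver's PPD):

    ```sh
    # Two photos per A4 page, scaled to fit, at the highest IPP print quality
    lp -d photo -o media=A4 -o number-up=2 -o fit-to-page -o print-quality=5 IMG_0001.jpg IMG_0002.jpg

    # List the driver-specific options (paper trays, borderless sizes, media types)
    lpoptions -p photo -l
    ```

    That still leaves mixed-size grids and previews to an application, which is exactly the gap described above.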

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up: root@bondigas:~# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Wed Oct 20 20:00:41 2010 Raid Level : raid5 Array Size : 5860543488 (5589.05 GiB 6001.20 GB) Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 20 20:13:48 2010 State : clean, degraded, recovering Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64K Rebuild Status : 1% complete UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas) Events : 0.12 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 4 8 64 3 spare rebuilding /dev/sde While that's going I decided to format the beast with the following command: root@bondigas:~# mkfs.ext4 /dev/md1p1 mke2fs 1.41.11 (14-Mar-2010) /dev/md1p1 alignment is offset by 63488 bytes. This may result in very poor performance, (re)-partitioning suggested. Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=16 blocks, Stripe width=48 blocks 97853440 inodes, 391394047 blocks 19569702 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 11945 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 Writing inode tables: ^C 27/11945 root@bondigas:~# ^C I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match so I can format it properly.
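
    For what it's worth, the warning comes from the partition inside the array rather than the array itself. A rough sketch of the two usual fixes, using only the geometry reported above (64 KiB chunk, 4 KiB blocks, 3 data disks, so stride 16 and stripe width 48); the parted/mkfs invocations are an untested sketch, not a verified recipe:

    ```sh
    # Option 1: skip the partition table and format the md device directly;
    # mkfs.ext4 usually detects the RAID geometry of /dev/md1 on its own
    mkfs.ext4 -E stride=16,stripe_width=48 /dev/md1

    # Option 2: keep a partition, but start it on a 1 MiB boundary so the start
    # is a multiple of the 64 KiB chunk size, then format the aligned partition
    parted -a optimal /dev/md1 mklabel gpt
    parted -a optimal /dev/md1 mkpart primary 1MiB 100%
    mkfs.ext4 -E stride=16,stripe_width=48 /dev/md1p1
    ```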

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel-based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly, for non-destructible/non-constructible voxel style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If generated meshes per chunk is the way to go, what kind of algorithm might I want to use based on only EVER needing the outer layer rendered? I'm using 3D Perlin Noise for terrain generation (for overhangs/caves/etc). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain be rendered, but with 20k quad instances, it's still pretty sluggish (30fps on my machine). My goal is for the world to be made using quite small cubes. Where multiple games (IE: Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 width/length and 9 high. With a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So, I have been trying to look into changing my method from instancing to mesh generation on a chunk by chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know if it's the better route for my situation or not. I'm also starting to doubt my need of using 3D Perlin noise for terrain generation since I won't want the kind of depth it would seem best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
    Hi all, today we would like to inform you that you now have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it. Hello passionate technical people, First of all let me introduce myself: my name is Timur, I'm a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; without being an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was "introduced" to PHP recently by a client of mine who requested that the work be done specifically in PHP. Let me tell you a little story about my experience with open source and IDEs: when I was studying at university, in 2007 I think, I wrote a simple little application in PHP and thought "Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names". When I searched the internet, pretty much everyone was using Vim or Emacs on Linux, with no autocomplete anyway, just syntax highlighting. I remember using some tool like Notepad++, I think. Nowadays everything has changed: we have highlighting and autocomplete for just about all the standard things in PHP in many IDEs. I use NetBeans for PHP, and I am really happy with the experience I have there with standard PHP code, but for frameworks I still think there is a lot of room for improvement. For example, we have some Symfony 2 and Twig support, but I'd love to see more of that coming. For example, I'm a big fan of file templates, where the main goal is not to waste time writing over and over again something that can be generated, and it counts even more when you don't have a lot of autocomplete. So I thought, "Hey, I know Java a little, and NetBeans has plugins, so maybe it is worth trying to do a file templates plugin", and so I did - you can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn't hard, and it was even fun! Give back to open source. Now think a little: NetBeans is an open source project and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as the community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don't we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want, and it is free. Now how far is NetBeans from that point? I guess not so far - you might miss a little niche thing that you use on a daily basis, but then the question arises: why don't you make it happen on your own? NetBeans PHP Community Council. What I proposed is to create a NetBeans PHP Community Council formed of people willing to change something, willing to create plugins for their own needs and the needs of the community, to test the plugins they create, and basically to evolve NetBeans in the direction they want it to go. I have already talked with the NetBeans PHP team. They are only too happy to help this Council with technical advice, by opening some APIs we might need access to, and in other ways. One important thing to mention is that this Council is a community project, so although we'll have direct discussions with the NetBeans PHP dev team, NetBeans is not the leading force here - the community is. You can see more details about the goals and structure I proposed on the NetBeans PHP Community Council wiki page.
    We use this mailing list: [email protected] for discussions and topics related to the Council. How can I join? To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to this mailing list here: http://netbeans.org/projects/php/lists. In your mail please indicate your location, age and experience in both Java and PHP - I need these data to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you'll take a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • UPK Hands-on Labs at OHUG

    - by Karen Rihs
    Going to OHUG, June 18-22? Be sure to attend one or more UPK hands-on labs! Choose from Basic, Advanced, What's New, and Prebuilt Content!   Oracle User Productivity Kit 11.1 Workshop – Basic Stephen Armbruster, Oracle Corporation June 19, 2012, 11:00 a.m. – 12:00 p.m. June 20, 2012, 4:30 – 5:30 p.m. The User Productivity Kit (UPK) is a comprehensive, cost-effective, customizable solution that helps your organization quickly create the critical documentation, training, and support materials needed to drive project team and user productivity throughout the lifecycle of your software. The User Productivity Kit provides system process documentation, user acceptance test scripts, comprehensive instructor-led training materials, web-based training materials, role-based performance support, and complete documentation. Also provided is the UPK Developer, which serves as a single-source development and customization tool to enable rapid content creation and customization. The User Productivity Kit delivers: Business process documentation for fit-gap analysis - providing time and cost savings that jump-start your implementation or upgrade User Acceptance test scripts to help test applications prior to go-live State-of-the-art instructional design tools to rapidly build and tailor documentation, instructor-led training materials, and web-based training to fit organizational needs Live-application performance support with transactional and procedural information to maximize user efficiency. By registering for this hands-on UPK workshop, participants will use UPK to build an application job aid and simulation that can be used as performance support for the application. But hurry, space is limited! Oracle User Productivity Kit 11.1 Workshop – Advanced Stephen Armbruster, Oracle Corporation June 20, 2012, 1:30 – 2:30 p.m. This special workshop is for those already familiar with UPK and will cover advanced concepts. In this workshop, you will gain an in-depth knowledge of working with the UPK Developer. Following this workshop, you will be able to: Create publishing categories Add a logo to a publishing project Publish using the newly created category Configure your own library view Manage topic history in a multi-user environment Oracle User Productivity Kit 11.1 Workshop – What’s NEW! Stephen Armbruster, Oracle Corporation June 19, 2012, 1:30 – 2:30 p.m. June 21, 2012, 1:00 – 2:00 p.m. This special workshop is for those already familiar with UPK and will focus on the new features included in the latest version 11.1. In this workshop, you will review most of the new features included in the UPK Developer. Oracle User Productivity Kit 11.1 Workshop – Prebuilt Content Stephen Armbruster, Oracle Corporation June 19, 2012, 4:30 – 5:30 p.m. June 21, 2012, 2:15 – 3:15 p.m. This special workshop is for those already familiar with UPK and will focus on the latest version 11.1. At the end of this workshop, you will be able to demonstrate how to: Import prebuilt content Modify content frames Add a decision frame Translate a topic into Spanish Stephen Armbruster is a principal sales consultant, specializing in HCM and UPK applications for Oracle over the past twelve years. In addition to his current role, he serves as an ambassador for the Fusion User Experience (UX) team and is tasked with evangelizing the UX for end users across all Oracle brands (Fusion, PSFT, JDE, and EBS).  He is also a trusted advisor to Oracle’s Product Management teams related to Learning Management Systems (LMS). 
Prior to joining Oracle, he was an instructor as well as an instructional technologist working in the medical diagnostics, high tech, and information management industries. As an expert in both LMS and UPK, he regularly speaks at Oracle conferences including Oracle OpenWorld and OHUG on topics that span using Oracle solutions to accomplish employee training, certification, and user adoption. His presentations are both entertaining and engaging.

    Read the article

  • Need assistance matching a general theme style as well as eCommerce capability

    - by humble_coder
    I'm in the process of acquiring a new design client. They are getting into the business of "auto parts wholesaling" and they want a storefront. My preference is/was to create something from scratch. However, there is an established trend in their particular market (similar parts, layout, etc.). They insist on following that existing visual trend, as per the following: http://www.xtremediesel.com/ http://www.thoroughbreddiesel.com/ http://www.alligatorperformance.com/ My plan of attack at this point is to find a comparable WP theme and a flexible (but useful) backend/product management system. Their current demo site (which their previous developer made a stab at) is using Pinnacle Cart. It is nowhere near what they need, nor is it intuitive to work with. I was actually considering Magento for its greater capabilities, but I'm still weighing options. That said, my two primary dilemmas are as follows: 1) I need a theme that mimics the general style of those listed. They explicitly said they didn't want anything too clean (e.g. ThemeForest, Woothemes) as it "wasn't rugged or busy looking enough" for their field. 2) I need a WP/Magento/WP e-Commerce (or any one of a host of other) plugin that will allow for bulk import/update of nearly 200,000 products, descriptions and images. I'm not opposed to manually interfacing with the DB for the import, but in the end I need a store/system that doesn't needlessly add 50 tables to accommodate some "wet behind the ears" concept of table normalization and is easy to add to. Anyway, if anyone has any quality suggestions regarding either of these issues, it would be most appreciated. Best.
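
    On the bulk-import point, one common pattern is to stage the CSV in its own table with LOAD DATA and then map it into whatever schema the chosen cart uses. A rough sketch only - the database, table, and column names below are made up for illustration and are not Magento's or WP e-Commerce's real schema:

    ```sh
    # Load ~200k products from CSV into a MySQL staging table in one pass,
    # then transform/map into the storefront's own tables from there.
    mysql --local-infile=1 -u shopuser -p shopdb <<'SQL'
    CREATE TABLE IF NOT EXISTS staging_products (
        sku         VARCHAR(64) PRIMARY KEY,
        name        VARCHAR(255),
        description TEXT,
        price       DECIMAL(12,2),
        image_url   VARCHAR(255)
    );
    LOAD DATA LOCAL INFILE '/tmp/products.csv'
    INTO TABLE staging_products
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (sku, name, description, price, image_url);
    SQL
    ```

    From the staging table, the cart's own import tooling (or a one-off SQL mapping) can do the rest without fighting its table layout by hand.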

    Read the article
