Search Results

Search found 4760 results on 191 pages for 'coding guideline'.


  • Can anyone give me a sample DSP script in C/C++

    - by Andrew
    I'm working on an audio DSP project and wondering if there are any open-source DSP examples written in C or C++ for my MSP430 chip. I just want something as a guideline so I can write my own code using the ADC and DAC on my board for sampling. http://focus.ti.com/docs/toolsw/folders/print/msp-exp430f5438.html That's my board, the MSP430F5438 Experimenter Board; from what I heard it can run DSP code via the USB connection to the computer. I'm using CCS (Code Composer Studio, from TI) and Octave/MATLAB. Any DSP example code or sites that would help me create my own would be appreciated. What I'm trying to do: partial audio (sampled) track -- Nyquist-rate sampling -- over- and undersampling -- reconstruction of the audio track.
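
    As a starting point, here is a minimal host-side sketch in C++ of the last part of that pipeline (nothing MSP430-specific; the sample rate, test tone and decimation factor are made-up values): it synthesizes a sampled sine, undersamples it by keeping every M-th sample, and reconstructs the original rate by linear interpolation, so you can experiment on a PC before porting anything to the ADC/DAC on the board.

      // Undersampling and reconstruction sketch (host-side, illustrative values).
      #include <cmath>
      #include <cstdio>
      #include <vector>

      int main() {
          const double kPi = 3.14159265358979323846;
          const double fs = 8000.0;  // sampling rate in Hz (assumed)
          const double f  = 440.0;   // test tone in Hz (assumed)
          const int    N  = 256;     // number of samples
          const int    M  = 4;       // decimation factor

          // The "sampled" input track.
          std::vector<double> x(N);
          for (int n = 0; n < N; ++n)
              x[n] = std::sin(2.0 * kPi * f * n / fs);

          // Undersample: keep every M-th sample (aliases if f > fs / (2 * M)).
          std::vector<double> y;
          for (int n = 0; n < N; n += M)
              y.push_back(x[n]);

          // Reconstruct at the original rate by linear interpolation.
          std::vector<double> r(N, 0.0);
          for (int n = 0; n + M < N; ++n) {
              int    k = n / M;
              double t = double(n % M) / M;
              r[n] = (1.0 - t) * y[k] + t * y[k + 1];
          }

          // Compare a few input and reconstructed samples.
          for (int n = 0; n < 16; ++n)
              std::printf("%3d  x=% .4f  r=% .4f\n", n, x[n], r[n]);
      }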

    Read the article

  • How should a mailbox be designed in PHP?

    - by sandy
    How do I design a mailbox for my site? I am new to the PHP language, and I have been asked to design a mailbox for each member of my site. The site contains members and a personal page for each member, plus an inbox in which a member can read private messages, as well as send new messages and keep the messages he has sent. How do I do that? I just want a guideline for the right way. How do I start the design?

    Read the article

  • When is performance gain significant enough to implement that optimization?

    - by Zwei steinen
    Hi, following the textbook, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decide decisively whether I should implement that optimization. For example, when a fix shortens an average response time of 100ms to 90ms under some conditions, should I implement that fix? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall? I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb that I should follow? Are there any guidelines or best practices?
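
    One common rule of thumb is to treat a gain as real only when it clearly exceeds your measurement noise, which means measuring spread, not just a single run. A minimal C++ timing harness along those lines (doWork() is a placeholder for the code path being optimized; the run count is arbitrary):

      #include <algorithm>
      #include <chrono>
      #include <cstdio>
      #include <vector>

      void doWork() { /* placeholder: the code path being measured */ }

      int main() {
          const int runs = 30;  // enough repetitions to see run-to-run spread
          std::vector<double> ms(runs);
          for (int i = 0; i < runs; ++i) {
              auto t0 = std::chrono::steady_clock::now();
              doWork();
              auto t1 = std::chrono::steady_clock::now();
              ms[i] = std::chrono::duration<double, std::milli>(t1 - t0).count();
          }
          std::sort(ms.begin(), ms.end());
          double sum = 0.0;
          for (double m : ms) sum += m;
          // If a 100ms -> 90ms difference is smaller than the min-max spread,
          // the "gain" may just be noise.
          std::printf("min %.2f  median %.2f  mean %.2f  max %.2f (ms)\n",
                      ms.front(), ms[runs / 2], sum / runs, ms.back());
      }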

    Read the article

  • What kind of icon should I deploy with my Android 1.x and 2.x application?

    - by licorna
    The thing is this: in Android 1.5 and 1.6 we had the Icon Design Guidelines. In this guide there are specifications for application icons, and every application should conform to them. However, in recent Android versions (2.0 and 2.1) icons have changed from the old style to a new flat 2D style. Every icon on the Nexus One has this style, so not even Google is conforming to the guideline. To see the differences between high- and low-density icons, see this image and compare the Evernote icon with the rest. I've been able to use different icons by using two directories with different icons: drawable-hdpi/icon.png and drawable/icon.png, BUT not every Android 2.x device is going to be HDPI and not every 1.x Android device is going to be low pixel density. So the question is: should I deploy different icons for different Android platform versions within my apk file? And if I should, how do I do it?

    Read the article

  • GUI Design - Role selection with two lists: selected roles in the right or the left list?

    - by subes
    Hi, imagine two lists for selecting roles during new-user creation in an administrator frontend. One list has all the available roles in it, and the other has the selected roles. Between the lists are buttons to move elements from one list to the other, so the layout is horizontal: [List] [Buttons] [List]. Now to the question: should the selected elements be in the left or the right list? Intuitively, some people I asked say that the selected elements have to be in the right list. But some other people follow a guideline that says that the more important content in a frontend should be on the left side; they also think the selected elements are more important than the non-selected ones, so the selected ones should be in the left list. What's your opinion?

    Read the article

  • Get the IDs of users who like my page

    - by jon
    Hello. I would like to know if there is any API feature that can tell me which users liked my application page. I know that it is possible to check whether a given user is a fan of a Page using the page.isFan call, or to get the number of fans. But is it possible to get the fans' user IDs? If so, is it possible to get that data ordered by the date on which they became fans of the page? For example, using FQL I can get the user ID of someone who liked a Facebook object such as a post or photo: http://developers.facebook.com/docs/reference/fql/like Can I do this for the people who liked my page? Thanks for any help or guidelines.

    Read the article

  • Factorial Algorithms in different languages

    - by Brad Gilbert
    I want to see all the different ways you can come up with for a factorial subroutine or program. The hope is that anyone can come here and see if they might want to learn a new language. Ideas:

      - Procedural
      - Functional
      - Object Oriented
      - One-liners
      - Obfuscated
      - Oddball
      - Bad Code
      - Polyglot

    Basically I want to see an example of different ways of writing an algorithm and what they would look like in different languages. Please limit it to one example per entry. I will allow you to have more than one example per answer if you are trying to highlight a specific style, language, or just a well-thought-out idea that lends itself to being in one post. The only real requirement is that it must find the factorial of a given argument, in all languages represented. Be creative! Recommended guideline:

      # Language Name: Optional Style type
      - Optional bullet points
      Code Goes Here
      Other informational text goes here

    I will occasionally go along and edit any answer that does not have decent formatting.
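
    To seed the thread, one possible entry in the recommended format, using C++ (recursive style; note that unsigned long long overflows past 20!):

      # C++: Recursive
      - constexpr, so it can also be evaluated at compile time
      - unsigned long long overflows for n > 20

      #include <cstdio>

      constexpr unsigned long long factorial(unsigned n) {
          return n <= 1 ? 1ULL : n * factorial(n - 1);
      }

      int main() {
          for (unsigned n = 0; n <= 10; ++n)
              std::printf("%u! = %llu\n", n, factorial(n));
      }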

    Read the article

  • How to code for the Alternate Course, AKA the Rainy Day Scenario?

    - by janetsmith
    An alternate course is what happens when the user doesn't do what you expected, e.g. keys in the wrong password, presses the back button, or a database error occurs. For any programming project, alternate courses account for more than 50% of the project timeline; they are important. However, most computer books focus only on the basic course (when everything goes fine). The basic course is rather simple compared to the alternate courses, because it is normally given by the client. The alternate courses are what we, as programmers or business analysts, need to take care of. Java has a built-in mechanism (try-catch) to force us to handle unexpected behavior. The question is, how should we handle it? Is there any pattern to follow? Any guideline or industry practice for handling alternate courses?
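
    One widely used pattern is to treat expected alternate courses (wrong password, empty input) as explicit checks that raise domain-level errors, and to translate unexpected infrastructure failures (database down) into those same domain errors at the boundary. A sketch of that shape, written here in C++ though the same structure applies to Java's try-catch (all names are illustrative):

      #include <cstdio>
      #include <stdexcept>
      #include <string>

      // Domain-level error the UI layer knows how to present.
      struct AuthError : std::runtime_error {
          using std::runtime_error::runtime_error;
      };

      bool passwordMatches(const std::string& user, const std::string& pw) {
          // Placeholder for a real credential lookup; may throw on DB failure.
          if (user.empty()) throw std::runtime_error("db: no such user");
          return pw == "secret";
      }

      void login(const std::string& user, const std::string& pw) {
          if (user.empty() || pw.empty())
              throw AuthError("missing user name or password");  // expected case
          bool ok = false;
          try {
              ok = passwordMatches(user, pw);
          } catch (const std::exception& e) {
              // Unexpected failure, translated at the boundary.
              throw AuthError(std::string("login unavailable: ") + e.what());
          }
          if (!ok)
              throw AuthError("wrong password");                 // expected case
      }

      int main() {
          try {
              login("alice", "wrong");
          } catch (const AuthError& e) {
              std::printf("alternate course: %s\n", e.what());
          }
      }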

    Read the article

  • How to deal with PDF annotation on iPad in Objective-C?

    - by Sarah
    Hello. I know it may sound like a silly question, but I am really very confused. I am to work on an application that has operations like PDF loading, annotation, scrolling, zooming and other such functions. Now, I am a little bit confused about which template I should use. I went through the Quartz 2D Programming Guide and was a little unsure whether I'll be able to implement the functions above following the same guide, as it displays the PDF page on the whole screen. Or is there another way around this? Please help me. Can I use UIWebView for the functions I listed above? I'll be grateful if you can help me. Thank you.

    Read the article

  • How to distinguish properties from constants if they both use PascalCasing for naming?

    - by Gishu
    As stated in the Framework Design Guidelines and on the Web in general, the current guideline is to name your constants like this: LastTemplateIndex, as opposed to LAST_TEMPLATE_INDEX. With the PascalCasing approach, how do you differentiate between a property and a constant? ErrorCodes.ServerDown is fine. But what about private constants within your class? I use quite a lot of them for naming magic numbers, or for expected values in my unit tests, and so on. The ALL_CAPS style helps me know that it is a constant: _testApp.SelectTemplate(LAST_TEMPLATE_INDEX); *Disclosure: I have been using the SCREAMING_CAPS style for constants for a while, and I find it more readable than a squishedTogetherPascalCasedName. It actually STANDS_OUT in a block of text.*

    Read the article

  • Best Practice: Software Versioning

    - by Sebi
    I couldn't find a similar question here on SO, but if you find one, please link it. Is there any guideline or standard best practice for how to version software you develop in your spare time for fun, but which will nevertheless be used by some people? I think it's necessary to version such software so that you know which version one is talking about (e.g. for bug fixing, support, and so on). But where do I start the versioning? 0.0.0? Or 0.0? And then how do I increment the numbers? Major release.minor change? And shouldn't every commit to a version control system be another version? Or is that only for versions which are used in production?
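
    The scheme most projects converge on is MAJOR.MINOR.PATCH (semantic versioning): start at something like 0.1.0, bump PATCH for bug fixes, MINOR for new features, MAJOR for breaking changes, and identify individual commits by the VCS revision rather than a release number. A tiny C++ sketch of how such versions compare (the starting numbers are convention, not a rule):

      #include <cstdio>
      #include <tuple>

      struct Version {
          int major, minor, patch;
          bool operator<(const Version& o) const {
              // Versions order lexicographically by (major, minor, patch).
              return std::tie(major, minor, patch)
                   < std::tie(o.major, o.minor, o.patch);
          }
      };

      int main() {
          Version first{0, 1, 0}, fix{0, 1, 1}, feature{0, 2, 0}, stable{1, 0, 0};
          std::printf("%d\n", first < fix && fix < feature && feature < stable);  // prints 1
      }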

    Read the article

  • Why can't I start XML entity names with 'xml'?

    - by flamey
    <XmlInfo /> I ran into a problem using Perl's popular SOAP::Lite module, where it wouldn't accept XML entity names starting with "xml" (regardless of the letters' case). The author of the module replied to an email saying that entity names starting with "xml" are not permitted by the XML specification, but I couldn't find that in the W3C's documents for the 1.0 and 1.1 specs, and I also couldn't find it in any of the articles or guideline documents about XML entity naming. In fact, some guideline documents use names starting with "xml" as examples, and lots of people are using them, as I can see via Google Code Search. So are there any restrictions (besides which characters may be used, as defined in the W3C's documents) on entity naming in XML? Is there a restriction saying you can't name entities starting with "xml"?

    Read the article

  • How to test a .net application against a proxy?

    - by Pierre-Alain Vigeant
    I need to support the use of a proxy in our application, which uses WCF connections. We do not have any proxy server on our network, and I don't want to disrupt our corporate network by requesting a proxy installation. I was thinking of installing a proxy server on a local virtual machine and configuring Internet Explorer so that it is challenged by that proxy. I don't know what proxy software to use (I don't want to install ISA Server) and I don't know how to configure one. Does someone have a suggestion for easy-to-use software that will require authentication for any WCF service, and do you have any guidelines that would be helpful to know when testing software against a proxy?

    Read the article

  • How do I build a custom "Your Account" page with WordPress?

    - by Colour Blend
    Hello everyone. Please can you help with a guideline for this requirement? Some visitors to my site need to be able to log in and be given access to an Accounts page. This page will contain a list of links, added by the site administrator, which lead to pages with specific content. Every page/link is unique to a user (account); no two users will ever see the same page. Please help me. I just need a guide; you don't need to spoon-feed me. Thanks in advance.

    Read the article

  • Where are the best locations to write an error log in Windows?

    - by Keith Sirmons
    Where would you write an error log file, say ErrorLog.txt, in Windows? Keep in mind that the path would need to be open to basic users for file-write permissions. I know the event log is a possible location for writing errors, but does it work for "user"-level permissions? EDIT: I am targeting Windows 2003, but I posed the question in such a way as to get a general guideline for where to write error logs. As for the event log, I have had issues before in an ASP.NET application where I wanted to log to the Windows event log but had security issues causing me heartache. (I do not recall the exact issues, but I remember having them.)
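
    A common choice is the shared application-data folder (All Users\Application Data on Windows 2003, C:\ProgramData on later versions), resolved via CSIDL_COMMON_APPDATA rather than hard-coded. A C++ sketch of that approach; note that an installer may still need to grant write ACLs on the app's subfolder for non-admin users, and "MyApp" is an illustrative name:

      #include <windows.h>
      #include <shlobj.h>   // SHGetFolderPathA; link with shell32.lib
      #include <cstdio>
      #include <string>

      int main() {
          char base[MAX_PATH];
          if (FAILED(SHGetFolderPathA(NULL, CSIDL_COMMON_APPDATA, NULL,
                                      SHGFP_TYPE_CURRENT, base)))
              return 1;
          std::string dir = std::string(base) + "\\MyApp";
          CreateDirectoryA(dir.c_str(), NULL);  // no-op if it already exists
          std::string path = dir + "\\ErrorLog.txt";
          if (FILE* f = std::fopen(path.c_str(), "a")) {
              std::fputs("error: something went wrong\n", f);
              std::fclose(f);
          }
          return 0;
      }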

    Read the article

  • Qt: QAbstractItemModel and 'const'

    - by Eye of Hell
    Hello. I'm trying to use a QTreeView for the first time, with a QAbstractItemModel, and there is an immediate problem: the QAbstractItemModel interface declares its methods as 'const', assuming they will not change the data. But I want the result of an SQL query displayed, and returning the data for a record with a specified index REQUIRES the use of QSqlQuery::seek(), which is non-const. Is there any 'official' guideline for using a QAbstractItemModel with data that MUST be changed in order to get the number of items, the data per item, etc.? Or must I hack around C++ const-ness with const casts?
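
    One common answer (a sketch, not an official Qt guideline): the model is logically const, since seek() only moves an internal cursor, so the query member can be declared mutable and used inside the const methods without const_cast. The class below is illustrative; Qt also ships QSqlQueryModel, which wraps a query as a read-only model for you.

      #include <QAbstractTableModel>
      #include <QSqlQuery>
      #include <QVariant>

      class QueryModel : public QAbstractTableModel {
      public:
          QueryModel(QSqlQuery query, int rows, QObject* parent = nullptr)
              : QAbstractTableModel(parent), m_query(query), m_rows(rows) {}

          int rowCount(const QModelIndex& = QModelIndex()) const override {
              return m_rows;
          }
          int columnCount(const QModelIndex& = QModelIndex()) const override {
              return 1;
          }
          QVariant data(const QModelIndex& index, int role) const override {
              if (role != Qt::DisplayRole || !index.isValid())
                  return QVariant();
              m_query.seek(index.row());  // non-const call, fine on a mutable member
              return m_query.value(index.column());
          }

      private:
          mutable QSqlQuery m_query;  // mutated only to move the cursor
          int m_rows;
      };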

    Read the article

  • SVN user guidelines

    - by Oliver Moran
    I have been tasked with writing a set of SVN user guidelines for developers in my company. The guidelines are to be written solely from a user perspective (e.g. commit comments, when to commit) and not from an administrative perspective (e.g. when to tag, how to structure the repository). An administrative guideline will be written in a separate document. We are an app development house that is also involved in embedded development, so our developers range from HTML5 and Flash to Java and C. Some of our coding involves forking very large (millions of files) code bases; other parts involve ground-up development. Are there any best practices for the use of SVN from a user (i.e. grunt developer) perspective?

    Read the article

  • Gradient boosting predictions in low-latency production environments?

    - by lockedoff
    Can anyone recommend a strategy for making predictions with a gradient boosting model in the <10-15ms range (the faster the better)? I have been using R's gbm package, but the first prediction takes ~50ms (subsequent vectorized predictions average 1ms, so there appears to be overhead, perhaps in the call to the C++ library). As a guideline, there will be ~10-50 inputs and ~50-500 trees. The task is classification, and I need access to the predicted probabilities. I know there are a lot of libraries out there, but I've had little luck finding information even on rough prediction times for them. Training will happen offline, so only predictions need to be fast. Also, predictions may come from a piece of code or library that is completely separate from whatever does the training (as long as there is a common format for representing the trees).
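
    One strategy that fits the "common format for the trees" constraint is to export the fitted trees and evaluate them in your own code: with nodes stored in flat arrays, scoring 500 trees on 50 features is microseconds of pointer-free traversal, well under the latency budget. A C++ sketch of the idea; the node layout and the logistic link are assumptions for illustration, not gbm's actual export format:

      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Node {
          int   feature;    // feature index to test, or -1 for a leaf
          float threshold;  // go left if x[feature] < threshold
          int   left, right;
          float value;      // leaf contribution (used when feature == -1)
      };

      struct Tree { std::vector<Node> nodes; };

      // Sum the leaf values over all trees, then map through a logistic link.
      double predictProbability(const std::vector<Tree>& trees, const float* x) {
          double sum = 0.0;
          for (const Tree& t : trees) {
              int i = 0;
              while (t.nodes[i].feature >= 0)
                  i = (x[t.nodes[i].feature] < t.nodes[i].threshold)
                          ? t.nodes[i].left
                          : t.nodes[i].right;
              sum += t.nodes[i].value;
          }
          return 1.0 / (1.0 + std::exp(-sum));
      }

      int main() {
          // A single decision stump with made-up values: x[0] < 0.5 ? -1.2 : +0.8
          Tree stump{{{0, 0.5f, 1, 2, 0.0f},
                      {-1, 0.0f, 0, 0, -1.2f},
                      {-1, 0.0f, 0, 0, 0.8f}}};
          float x[1] = {0.7f};
          std::printf("p = %.3f\n", predictProbability({stump}, x));
      }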

    Read the article

  • How to manage the CSS of big websites in a team environment without a mess?

    - by jitendra
    Where multiple people work on the same CSS, is it possible to follow semantic naming rules even on large websites? If I write all the main CSS with semantic names the first time, what guidelines or instructions should I give the other developers to maintain CSS readability and validity, and to let everyone know quickly where others are adding their own CSS when required? Right now everyone just goes to the bottom of the file and writes the required CSS classes or IDs there, and most of the time they don't use semantic names.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma-separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

      EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
      102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
      101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
      103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
      304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
      333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
      100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
      334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
      400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the database connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop-down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed, and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens at this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop-down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window, in which I can change the attribute name and select the desired type from the "Data type" drop-down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then, for the EmployeeID and DepartmentID attributes I select Integer as the data type, for the StartingDate and TerminationDate attributes I select Datetime, and for the FinalSalary attribute I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or, in Europe, the pound or euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator, and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop-down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop-down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop-down menu.

    The wizard first displays the table description, and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post in my Windows Azure Evolution series, focusing on the new features and enhancements that arrived with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works with the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this post on I will begin to introduce some new features. Let's have a look at the first of them, Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, is a new feature still in the preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open-source apps of their choice, and then deploy through mechanisms such as FTP, Git and TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage.

    After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what is the difference between them? Let's have a quick comparison.

                          CLOUD SERVICE                            WEB SITE
    OS                    Windows Server                           Windows Server
    Virtualization        Windows Azure Virtual Machine            Windows Azure Virtual Machine
    Host                  IIS                                      IIS
    Platform              ASP.NET WebForm, ASP.NET MVC, WCF        ASP.NET WebForm, ASP.NET MVC, PHP
    Language              C#, VB.NET                               C#, VB.NET, PHP
    Database              SQL Database                             SQL Database, MySQL
    Architecture          Multi-layered, background worker,        Simple website with backend
                          message queuing, etc.                    database
    VS Project            Windows Azure Cloud Service              ASP.NET Web Form, ASP.NET MVC, etc.
    Out-of-box Gallery    (none)                                   Drupal, DotNetNuke, WordPress, etc.
    Deployment            Package upload, Visual Studio publish    FTP, Git, TFS, WebMatrix
    Compute Mode          Dedicated VM                             Shared across VMs, dedicated VM
    Scale                 Scale up, scale out                      Scale up, scale out

    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on complex, multi-layered web applications. For example, if you want to build a website with a frontend layer, a middle business layer and a data access layer, with some background worker processes connected through a message queue, then you had better use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use a web site. Since web sites offer many galleries, you can create them without any coding or configuration. David Pallmann has an awesome figure explaining the trade-offs between the cloud service, web site and virtual machine.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features of WAWS is building a website from an existing gallery, which means we don't need to code or configure anything. All we need to do is open the Windows Azure developer portal, click the NEW button, and select WEB SITE and FROM GALLERY. In the pop-up window there are many websites we can choose to use. For example, for a personal blog there are Orchard CMS and WordPress; for a CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button.

    The next step is to configure the web site. We will need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we need to create a MySQL database as well.
    Windows Azure Web Sites uses ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we chose to create a new MySQL database, we need to specify the database name and region in the last step, and we need to accept ClearDB's terms as well. The Windows Azure platform will then download the WordPress code and deploy the MySQL database and website, after which it will be ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog on the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I have just created in the portal and have a look at what we can do. The website details page has five sections.

    - Dashboard: The overall information about this website, such as basic usage status, public URL, compute mode, FTP address and subscription, plus links where we can specify the deployment credentials, TFS and Git publish settings, etc.

    - Monitor: Status information such as CPU usage, memory usage, errors, etc. We can add more metrics by clicking the ADD METRICS button at the bottom as well.

    - Configure: Here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.

    - Scale: This is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website is shared with other web sites on a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe), with some sandbox technology to isolate it from the others. When we scale out a web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is "dedicated", which means our web site uses whole Windows Azure virtual machines. This is the same hosting behavior as a cloud service web role: in a web role the application is deployed on the virtual machines we specified, and all of them are used only by us. Web site dedicated mode is the same. In this mode, when we scale out our web site we use more virtual machines, each of which hosts only our own website, and we can specify the virtual machine size. In the developer portal we can select which mode we are using from the Scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count.

    - Linked Resources: The MySQL database created along with our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, you get up to 10 web sites free if you are using shared mode. If you are using dedicated mode, the price is that of the virtual machines you are using. For example, if you are using dedicated mode with two medium-size virtual machines, you will pay $230.40 per month. If there is a SQL Database linked to your web site, it will be charged separately based on the Pay-As-You-Go price.
    For example, a 1GB Web Edition database costs $9.99 per month. Bandwidth is charged as well; for example, 10GB of outbound data transfer costs $1.20 per month. For more information about pricing, please have a look at the Windows Azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without a single line of code. But in some cases we do need to code ourselves: we may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application that needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features for downloading and uploading code, integration with version control services such as TFS and Git, and deployment approaches through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post in the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010.

    A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    Data source parameters - these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side, and only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on the queried data only, rather than on all the data. As a result, only the queried data needs to be stored in memory, versus the whole dataset, which was the case with the old approach.

    Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures will also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries will be handled through the data source parameters and evaluated at run time. Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they will be applied directly on the database server and only the necessary data will be downloaded to the middle tier or client machine.

    Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.

    Improved performance and an optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.

    Full design-time support through wizards - declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available at design time in VS2005/VS2008/VS2010.

    More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.

    Read the article

  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that are in the public domain. As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne's Studio for Entity Framework (SEF). Below are my thoughts after working with the product for several weeks.

    My coding preference has always been maintainable code that is reusable across an enterprise's portfolio. Because of this, my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework.

    Before we get into the pros and cons, here is a summary of the main features listed for SEF:

      - Unified Data Context
      - Virtual Data Access
      - More Powerful Data Binding

    Pros

    The first thing I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context. Under RAD conditions this is set up automatically when you drop the object on your design surface. If you are like me and want to abstract your data management into a library, it takes a little more work, but it is still acceptable and gains the same benefits.

    The second feature I found beneficial is the definition of views with improved sorting and filtering. Again, the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code.

    Linq has become my friend over the last couple of years, and it was great to see that ComponentOne ensured it remained a first-class citizen in their design. When you look into this product yourself, I would suggest taking a dive into LiveLinq, which allows the joining of different data source types.

    As I went through discovering the features of this framework, I appreciated the number of examples they supplied for different uses. Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks using RAD, code-only and MVVM approaches.

    Cons

    The only area where I would really like to see improvement is the level of detail in their documentation. Specifically, I would like to have seen some of the supporting code explained in the examples (such as what certain supporting objects do) instead of having to go to the programmer's reference.

    I did find some cases where existing projects had trouble determining the scope allowed to the RAD controls, but I expect this is in part end-user related.

    Summary

    Overall I found the Studio for Entity Framework capable and well thought out. If you are already using the Entity Framework, this product will fit into your environment with little effort, in return for greater flexibility and robustness in your solutions. Whether the $895 list price for the standard version works for you will depend on your return on investment. It may be hard for smaller companies with only a small number of projects to stomach, but you get a full-featured product supported by a well-established company. The more projects and the more code you have, the greater your return on investment will be. Personally, I intend to apply this product to some production systems and will probably have some tips and tricks to share in the future.

    del.icio.us Tags: ComponentOne, Studio for Entity Framework, Geeks With Blogs, Influencers, Product Reviews

    Read the article

  • Iron Speed Designer 7.0 - the great gets greater!

    - by GGBlogger
    For Immediate Release
    Iron Speed, Inc.
    Kelly Fisher
    +1 (408) 228-3436
    [email protected]
    http://www.ironspeed.com

    Iron Speed Version 7.0 Generates SharePoint Applications
    New! Support for Microsoft SharePoint speeds application generation and deployment

    San Jose, CA – June 8, 2010. Software development tools maker Iron Speed, Inc. released Iron Speed Designer Version 7.0, the latest version of its popular Web 2.0 application generator. Iron Speed Designer generates rich, interactive database and reporting applications for .NET, Microsoft SharePoint and the cloud.

    In addition to .NET applications, Iron Speed Designer V7.0 generates database-driven SharePoint applications. The ability to quickly create database-driven applications for SharePoint eliminates a lot of work, helping IT departments generate productivity-enhancing applications in just a few hours. Generated applications include integrated SharePoint application security and use SharePoint master pages.

    "It's virtually impossible to build a database-driven application in SharePoint by hand. Iron Speed Designer V7.0 not only makes this possible, the tool makes it easy." – Razi Mohiuddin, President, Iron Speed, Inc.

    Integrated SharePoint application security

    Generated applications include integrated SharePoint application security. SharePoint sites and their groups are used to retrieve security roles. Iron Speed Designer validates the user against a Microsoft SharePoint server on your network by retrieving the logged-in user's credentials from the SharePoint context.

    "The Iron Speed Designer generated application integrates seamlessly with SharePoint security, removing the hassle of designing, testing and approving your own security layer." – Michael Landi, Solutions Architect, Light Speed Solutions

    SharePoint Solution Packages

    Iron Speed Designer V7.0 creates SharePoint Solution Packages (WSPs) for easy application deployment. Using the Deployment Wizard, a single application WSP is created and can be deployed to your SharePoint server.

    "Iron Speed Designer is the first product on the market that allows easy and painless deployment of database-driven .NET web applications inside the SharePoint environment." – Bryan Patrick, Developer, Pseudo Consulting

    SharePoint master pages and themes

    In V7.0, generated applications use SharePoint master pages and contain the same content as other SharePoint pages. Generated applications use the current SharePoint color scheme and display standard SharePoint navigation controls on each page.

    "Iron Speed Designer preserves the look and feel of the SharePoint environment in deployed database applications without additional hand-coding." – Kirill Dmitriev, Software Developer, Iron Speed, Inc.

    Iron Speed Designer Version 7.0 System Requirements

    Iron Speed Designer Version 7.0 runs on Microsoft Windows 7, Windows Vista, Windows XP, and Windows Server 2003 and 2008. It generates .NET Web applications for Microsoft SQL Server, Oracle, Microsoft Access and MySQL. These applications may be deployed on any machine running the .NET Framework. Iron Speed Designer supports Microsoft SharePoint 2007 and Windows SharePoint Services (WSS3). Find complete information about Iron Speed Designer Version 7.0 at www.ironspeed.com.

    About Iron Speed, Inc.

    Iron Speed is the leader in enterprise-class application generation. Our software development tools generate database and reporting applications in significantly less time and cost than hand-coding.
Our flagship product, Iron Speed Designer, is the fastest way to deliver applications for the Microsoft .NET and software-as-a-service cloud computing environments.   With products built on decades of experience in enterprise application development and large-scale e-commerce systems, Iron Speed products eliminate the need for developers to choose between "full featured" and "on schedule."   Founded in 1999, Iron Speed is well funded with a capital base of over $20M and strategic investors that include Arrow Electronics and Avnet, as well as executives from AMD, Excelan, Onsale, and Oracle. The company is based in San Jose, Calif., and is located online at www.ironspeed.com.

    Read the article
