Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don’t think any is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let’s go through some of their differences. Rapid Application Development Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist of HTML editing, and until that day comes, you have to live with source editing. Development Model Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just think of the GridView control. The focus is on the page, that’s where it all starts, and you can place everything in the same code behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests: you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods; you just don’t think of them as event handlers. In MVC you need to think more in HTTP terms, so HTTP verbs such as POST and GET are relevant to you, and you may write separate actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications. 
    You normally rely much more on JavaScript APIs (they are even included in the Visual Studio template), because much less is done for you. Reuse It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS. MVC on the other hand does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc., fit? The closest thing to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn’t know where it is being called. You also have partial views, which you can reuse in the same project, but there is no inheritance either. This, in my view, is a weakness of MVC. Architecture Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in any of these models, and some extensibility points apply to both, because, of course, both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code behind classes offer a number of “hooks” by which you can change the normal way of things, for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, possibly relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few. 
    The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation, and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general purpose development framework, but instead, a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse; in the latter case, you probably shouldn’t be using MVC at all. Also, MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly. Testing In a well defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and there might even be a need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mock objects first, HttpContext.Current being the first that comes to mind. MVC, on the other hand, makes testing controllers a breeze, so much so that it even includes a template option for generating boilerplate unit test classes right from the start. A well designed – from the unit test point of view – controller will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand. 
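    Tying into the testing discussion above, here is a minimal C# sketch of the kind of controller this makes possible. The names ScoresController, IScoreRepository and HighScore are hypothetical, not from the original post; the point is only that model binding and Data Annotations validation happen before the action runs, and that a fake repository can be handed in from a unit test with no HttpContext at all.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        public class HighScore
        {
            [Required]
            public string Player { get; set; }

            [Range(0, int.MaxValue)]
            public int Score { get; set; }
        }

        public interface IScoreRepository
        {
            void Save(HighScore score);
        }

        public class ScoresController : Controller
        {
            private readonly IScoreRepository _repository;

            public ScoresController(IScoreRepository repository)
            {
                _repository = repository;
            }

            [HttpPost]
            public ActionResult Submit(HighScore model) // model binding builds this from the posted form
            {
                if (!ModelState.IsValid)                // Data Annotations validation ran during binding
                    return View(model);

                _repository.Save(model);
                return RedirectToAction("Index");
            }
        }

    In a test you would simply call new ScoresController(fakeRepository).Submit(new HighScore { Player = "a", Score = 1 }) and assert on the returned ActionResult.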
    Myths Some popular but unfounded myths around MVC include: You cannot use controls in MVC: not true, actually, you can, at least with the Web Forms (ASPX) view engine; the declaration and usage is exactly the same as with Web Forms; You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits attribute of the Page directive, and with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <pages> element; MVC shields you from doing “bad things” on your views: well, you can place any code in a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code; The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model; Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself; Everything is testable: views aren’t, without accessing the site; MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever, MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example. Conclusion Well, this is how I see it. Some folks may think that I am being too harsh on MVC, probably because I don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with those who say that MVC is much superior to Web Forms; in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think about this!
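    As a footnote to the Reuse section above: the helpers the author mentions are plain extension methods over HtmlHelper. The C# sketch below (with a made-up LabelledTextBox name) shows the shape of one; as described, it is just a static method that emits markup, with no page lifecycle, no view state and no inheritance.

        using System.Web.Mvc;

        public static class FormExtensions
        {
            // Renders a <label>/<input> pair; reusable by referencing the assembly from any MVC project.
            public static MvcHtmlString LabelledTextBox(this HtmlHelper html, string name, string label)
            {
                var labelTag = new TagBuilder("label");
                labelTag.MergeAttribute("for", name);
                labelTag.SetInnerText(label);

                var inputTag = new TagBuilder("input");
                inputTag.MergeAttribute("type", "text");
                inputTag.MergeAttribute("name", name);
                inputTag.GenerateId(name);

                return MvcHtmlString.Create(labelTag.ToString() + inputTag.ToString(TagRenderMode.SelfClosing));
            }
        }

    In a Razor view this would be called as @Html.LabelledTextBox("City", "City:"), and in an ASPX view as <%: Html.LabelledTextBox("City", "City:") %>.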

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article was written at the request of the Application Development Team Leader of my company. His team encountered code where the application was preparing the string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP) and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still does not give the best performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, backups of a database in full recovery mode are taken as three different kinds of backup files. Full Database Backup Differential Database Backup Log Backup Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This will also keep the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to get the database online and operational. Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both the keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named “dummy”, and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax. SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning.
“Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check that whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor? 
    2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) do not compare as equal, the collation is width sensitive. In this example we have one table with two columns. One column has a width sensitive collation and the second column has a width insensitive collation. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, both having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases a database has to move from one place to another. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for schema and data and move it from one server to another. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be anything between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Four Tutorial for SQL Server 2012 New Features

    - by pinaldave
    One of the most common questions I receive on my Facebook page is whether there is any tutorial for the new and enhanced features and solutions in SQL Server 2012. I see this demand increasing as SQL Server 2012 is being adopted more and more. Here is the list of four tutorials which were specifically created for SQL Server 2012 by Microsoft. Multidimensional Modeling (Adventure Works Tutorial) This tutorial teaches you how to develop and deploy an Analysis Services project that enables the employees of Adventure Works Cycles to analyze various aspects of their business. Tabular Modeling (Adventure Works Tutorial) This tutorial teaches you how to create a SQL Server 2012 Analysis Services tabular model that enables sales and marketing teams to easily analyze internet sales data in the AdventureWorksDW2012 data warehouse. You will build the tabular model in SQL Server Data Tools. Tutorials and Demos for Power View Create Power View reports and explore Power View features. View demos, videos, and tutorials that help you get started quickly with Power View and successfully build reports with interactive filters and visualizations such as bubble charts, tiles, and cards. Tutorial: Using the hierarchyid Data Type This tutorial is intended for users who are experienced with Transact-SQL, but are new to the hierarchyid data type. In this tutorial, you convert an existing table to a hierarchical structure, and you also create a new table to store and manage hierarchical data efficiently. Note: The description of each course is taken from the original course description. You will need to install the SQL Server 2012 AdventureWorks databases for all of these tutorials. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling, and whether or not a user wants to log out, to my config.inc file. As I was writing my Auth class I started wondering whether or not my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (also, a large chunk of this code is based on a reply I found on SO except I can't find the source ._.):

        ini_set('session.name', 'SID');

        # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) {    // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();             // reset SID value
                $_SESSION['SID_CREATED'] = time();   // update creation time
            }

            if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                       // destroy session data
                $_SESSION = array();                     // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            }

            $_SESSION['SID_MODIFIED'] = time();      // update last activity time stamp
        }

    Read the article

  • Interactive Reporting with BI Publisher 11G

    - by kanichiro.nishida
    One of the new features that came out with BI Publisher 11G, and that made me really excited, is the Interactive Viewer, which allows you to interact with the data presented in the reports and gain more insights about the data. You can have as many Table, Chart, Pivot Table and Gauge components as you like in a single report, and all of them are linked together so that you can click on any data point on components such as a Chart, Pivot Table or Gauge, and that would refresh the other components in the report to reflect the selection without refreshing the whole report or page. No longer do you need to navigate to another report or open other reports to see related data or drill down to the detail data. It’s all there within a single report. Well, it sounds cliché, but really this is a typical case of ‘seeing is believing’, so instead of just reading about it I would love you guys to take a look, and I’ve recorded the following video. Please take a look! The above video shows you the power of the BI Publisher Interactive Viewer with real data from San Francisco Airport. The lists shown at the top or the left are one of the new features that we’ll be introducing very soon, but everything else is there with 11.1.1.3. So why not start today? And please share your feedback with us!

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data in to a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - thats probably because in writing it I bacame very time-concious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression which is the compliment of Deduplication. The exercises are just introductions - you are referred to the ZFS Adminstration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common componenents are shared. Task:See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib#zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Adminastrion Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while mainaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • How to Transpose Rows and Columns in Excel 2013

    - by Lori Kaufman
    You’ve set up your worksheet with all your row and column headings and you’ve entered all your data. Then, you discover that it would look better if the rows were the columns and the columns were the rows. How do you accomplish this easily? There is an easy way to convert your rows to columns and vice versa using the Transpose feature in Excel. We’ll show you how. Select the cells containing the headings and data you want to transpose. Click Copy or press Ctrl + C. Click in a blank cell on the spreadsheet. This cell will be the top, left corner of the new table of data. Click the down arrow on the Paste button and select Paste Special from the drop-down menu. On the Paste Special dialog box, select the Transpose check box so there is a check mark in the box and click OK. The rows become the columns and the columns become the rows. The original set of data still exists. You can select those cells and delete the headings and data, if desired. Isn’t that a lot easier and faster than retyping all your data?

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is due to various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • Ajax application: using SOAP vs REST ?

    - by coder
    I'm building an ajax-heavy application (client-side strictly html/css/js) which will be getting all the data and using server business logic via webservices. I know REST seems to be the hot topic but I can't find any good arguments. The main argument seems to be its "light-weight" nature. My impression so far is that wsdl/soap based services are more expressive and allow for a more complex transfer of data. It appears that soap would be more useful in the application I'm building, where the only code consuming the services will be the js downloaded in the client browser. REST on the other hand seems to have a smaller entry barrier and so can be more useful for services like twitter in allowing other developers to consume these services easily. Also, REST seems to be better suited for simple data transfers. So, in summary, SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of soap/rest for a heavy ajax web app? EDIT: While the wsdl is in xml, the data I'm transferring back and forth is actually in JSON. It just appears more natural to use wsdl/soap here due to the nature of the app. The verbs GET and POST may not be enough. I may want to say something like: processQueue, or executeTimer. This is why my conclusion has been that wsdl/soap would be good for bridging a complex layer between two applications (client and server), whereas REST would be better (due to its simplicity) for allowing many developer-users to consume resources programmatically. So you could say the choice falls along two lines: Will the app be verb-oriented (completing tasks: use SOAP) or noun-oriented (consuming resources: use REST)? Will the API be consumed by few developers or many developers (REST is strong for many developers)? Since such an ajax-heavy app would potentially use many verbs and would only be used by the client developer, it appears soap/wsdl would be the best fit.
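    To make the verb-oriented vs. noun-oriented distinction concrete, here is a hedged C# sketch of both shapes in an ASP.NET project. The QueueService/ProcessQueue and QueuesController names are made up for illustration; the SOAP-style side uses a classic ASMX web service (which can also be called from client JavaScript when marked as a ScriptService), while the REST-style side exposes a resource over GET and returns JSON.

        using System.Web.Mvc;
        using System.Web.Script.Services;
        using System.Web.Services;

        // Verb-oriented: an RPC-style operation ("do something"), typical of SOAP/WSDL services.
        [WebService(Namespace = "http://example.com/queue")]
        [ScriptService] // also allows JSON calls from the browser's JavaScript
        public class QueueService : WebService
        {
            [WebMethod]
            public int ProcessQueue(string queueName)
            {
                // ... server-side business logic would run here ...
                return 0; // e.g. number of items processed (placeholder)
            }
        }

        // Noun-oriented: a REST-ish resource ("get something"), addressed by URL and returned as JSON.
        public class QueuesController : Controller
        {
            [HttpGet]
            public ActionResult Item(string id)
            {
                var queue = new { Id = id, Length = 7 }; // placeholder representation of the resource
                return Json(queue, JsonRequestBehavior.AllowGet);
            }
        }

    Neither shape is inherently better; as the question concludes, an app full of operations like processQueue leans toward the first, while an API meant for many outside consumers leans toward the second.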

    Read the article

  • XNA Easy Storage XBOX 360 High Scores

    - by user1003211
    To follow up from a previous query - I need some help with the implementation of EasyStorage high scores, which is bringing up some errors on the Xbox. I get the prompt screen, a SaveDevice is selected and a file is created! However, the file remains empty (I've tried prepopulating it but still get errors). The full scoring code can be found here: http://pastebin.com/74v897Yt The current issue in particular is in LoadHighScores() - "There is an error in XML document (0, 0)." under the line data = (HighScoreData)serializer.Deserialize(stream); I'm not sure whether this line is correct either: HighScoreData data = new HighScoreData();

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    File.Open(Global.fileName_options, FileMode.OpenOrCreate, FileAccess.Read);
                    try
                    {
                        // Read the data from the file
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                    finally
                    {
                        // Close the file
                        stream.Close();
                        // stream.Dispose();
                    }
                });
            }
            return (data);
        }

    I call PromptMe(); when the Start button is pressed at the beginning. I call if (Global.SaveDevice.IsReady) { entries = LoadHighScores(HighScoresContainer, HighScoresFilename); } during the menu screen to try and display the high score screen. I call SaveHighScore(); when the game ends. I've tried altering the struct code to a class but still no luck. Any help greatly appreciated.

    Read the article

  • Using ASP.NET C# and Javascript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client JavaScript code and C# code behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from JavaScript to the C# code behind is by setting hidden asp variables and triggering a postback:

        <asp:HiddenField ID="RandomList" runat="server" />

        function SetDataField(data) {
            document.getElementById('<%=RandomList.ClientID%>').value = data;
        }

    Then in the C# code I collect the list:

        protected void GetData(object sender, EventArgs e)
        {
            var _list = RandomList.Value;
        }

    Going back the other way, I often use either the ScriptManager to register a script and pass it data during Page_Load:

        ScriptManager.RegisterStartupScript(this, this.GetType(), "Set", "Test();", true);

    or I add attributes to controls before a postback or during the initialization / pre-rendering stages:

        Btn.Attributes.Add("onclick", "DisplayMessage('Hello');");

    These methods have served me well and do the job. However, they just don't feel complete. Is there a more standardized way of passing data between client side markup / JavaScript and backend code? I've seen some posts like this one: Injecting JavaScript : StackOverflow that describe the HtmlElement class. Is this something I should look into? Thanks everyone for your time.
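    For what it's worth, here is a hedged C# sketch of one common way to tidy this up: serialize a .NET object to JSON on the server and hand it to the page as a startup script, and deserialize whatever the JavaScript wrote into the hidden field on postback. DataPage is a hypothetical code-behind class; RandomList is the HiddenField from the snippet above, and JavaScriptSerializer comes from System.Web.Extensions.

        using System;
        using System.Collections.Generic;
        using System.Web.Script.Serialization;
        using System.Web.UI;

        public partial class DataPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Server -> client: emit a JSON object the page's JavaScript can use directly.
                var serializer = new JavaScriptSerializer();
                string json = serializer.Serialize(new { Name = "Test", Values = new[] { 1, 2, 3 } });
                ClientScript.RegisterStartupScript(GetType(), "pageData",
                    "var pageData = " + json + ";", true);
            }

            protected void GetData(object sender, EventArgs e)
            {
                // Client -> server: parse the JSON the JavaScript stored in the hidden field before posting back.
                var serializer = new JavaScriptSerializer();
                List<string> list = serializer.Deserialize<List<string>>(RandomList.Value);
            }
        }

    This keeps the hidden-field trick but at least gives both sides a structured payload instead of ad-hoc strings.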

    Read the article

  • MCSE and MCSA makes a return to the world of certification..... but not as you know it.

    - by Testas
    Quick announcement
    Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams.
    You begin by achieving the MCSA - Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator).
    If you are starting out, this includes taking the following three exams:
    Exam 70-461: Querying Microsoft SQL Server 2012
    Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012
    If you have an MCTS in SQL Server 2008 already, you can take the following path:
    A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    Exam 70-457: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Once you have achieved your MCSA status, you can then start on your MCSE - Microsoft Certified Solutions Expert certification.
    You have a choice: you can do the MCSE: SQL Server 2012 Data Platform, the MCSE: SQL Server 2012 Business Intelligence, or both.
    MCSE: SQL Server 2012 Data Platform involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012
    There is also an upgrade path:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    Exam 70-457: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-459: transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform
    MCSE: SQL Server 2012 Business Intelligence involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012
    The upgrade path involves:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    Exam 70-457: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-460: transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence
    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence and you are starting from scratch, there will be 5 exams to take.
    If you have the ability to upgrade your certification because you already have an MCITP, then it will be three exams.
    Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx
    Thanks
    Chris

    Read the article

  • Hands-On Solutions – the Oracle Demo Platform

    - by A&C Redaktion
    With the new demo platform, Oracle wants to offer quick access to prepared demo environments. Sometimes a short demonstration says more than a thousand attempted explanations. Oracle has therefore set up a demo platform on which new solutions and products are continuously demonstrated in a hands-on way. The focus is not on theoretical possibilities, but on very practical problem cases – and how they are handled. The current topic is database security, using the E-Business Suite as an example – a topic that many a partner can put to good use in customer conversations. In the following demo environment you can present the database security features to your customers directly, such as the transparent encryption of application data (shown here with the E-Business Suite, but it also works with SAP or other applications) and the privilege concept for users and DBAs. In the demo you can explain the functionality of Oracle Database Vault, Oracle Advanced Security, the Security Option and Oracle Label Security. Oracle Advanced Security Address Industry and Privacy Regulations with Encryption Protect Application Data with Transparent Data Encryption Encrypt Data on the Network Oracle Database Vault Increase Security For Data Consolidation and Out-Sourced Administration Protect Application Data with Privileged User Controls Enforce Multi-factor Authorization and Separation of Duty Oracle Label Security Use Security Groups to control data access Assign OLS attributes to application, not necessarily database, users Each demo comes with a sample demo guide that you can use for orientation. This is the direct route to the demo platform, where you can view the demos for your own learning purposes and also reserve a time slot for customer presentations.

    Read the article


  • Where did ULSTraceLog go to in the SharePoint 2010 Logging Database?

    - by Jan Tielens
    The Logging Database is one of the many new concepts that will make the life of many SharePoint administrators quite a bit more enjoyable. In SharePoint 2007 the Unified Logging System (ULS) logged all of its data to text files, typically found on your SharePoint server in 12\LOGS. We still have that in SharePoint 2010, but besides those text files, ULS can also write the data to a database! The advantages are obvious: easy to query, one central location for all servers in the farm, easy to build reports etc. You can find this ULS data in the SharePoint 2010 logging database (typically called WSS_Logging), in the view ULSTraceLog. Quite recently on one of my demo machines (standalone installation on Windows 7) I noticed the ULSTraceLog view was not available in the logging database. It turned out that there is a Timer Job that’s responsible for writing the data to the database, when the Timer Job hasn’t executed, the view is not there (the first time it executes, the view is created). Even more, the timer job was disabled, so the view would never be created, nor any data would be written to the database. If you encounter this situation as well, it’s quite easy to solve: Open the SharePoint Central Administration site Navigate to the Monitoring section Select Review Job Definitions Click on the job with the name Diagnostic Data Provider: Trace Log Click on the Enable button to enable it Optionally click on Run Now afterwards, to start it immediately There you go, the ULSTraceLog will be created and the ULS messages will appear in the database!
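    To back up the "easy to query" point, here is a small hedged C# sketch that reads the most recent entries from the view. The connection string and the column names (LogTime, Area, Category, Message) are assumptions from memory rather than a documented contract, so run a quick SELECT TOP 1 * against your own WSS_Logging database first and adjust the names if they differ.

        using System;
        using System.Data.SqlClient;

        class UlsTraceLogQuery
        {
            static void Main()
            {
                // Point this at the SQL Server instance hosting your farm's logging database.
                const string connectionString =
                    "Data Source=.;Initial Catalog=WSS_Logging;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT TOP 20 LogTime, Area, Category, Message " +
                    "FROM dbo.ULSTraceLog ORDER BY LogTime DESC", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0:u} [{1}/{2}] {3}",
                                reader.GetDateTime(0), reader[1], reader[2], reader[3]);
                    }
                }
            }
        }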

    Read the article

  • Affect movieclip scale from a .as doc to another

    - by Madcowe
    I've been working on a game following a tutorial on the internet, the game is an avoider where you have the Avatar, that has to avoid the objects that fall. The way it is made is: I have a DocumentClass which addChild's the screen you should be seeing and removeChild's the screen that you were. For example: first it loads the menuScreen, then when you press play unloads menu and loads playscreen. When you die it loads the gameoverScreen and loads the playscreen. And from the gameOverScreen you can press the SHOP button to go to the shop. From here on I'm on my own and not following any tutorials. The shop has a button that is supposed to alter the Avatar's X and Y scale to 0.5, but the problem is: how do I make that work? I tried creating a sharedObject.data.avatarSize, on the store's size button the code would be something like: sharedObject.data.avatarSize *= 0.5; And on the AvoiderGame.as, which is the most of the actual game, on the part where the avatar is created I tried putting this after it's creation: scaleX.avatar = sharedObject.data.avatarSize; scaleY.avatar = sharedObject.data.avatarSize; This did not work since it gives me the error 1009 saying can't access something that is null. I tried this before "using" the sharedObject: if( sharedObject.data.avatarSize == null ) { sharedObject.data.avatarSize = 1; } But it did not work... So now I'm not sure on what to do. I know we should reduce global variables as much as we can but how do I do it? Also, if it helps, I'm using Flash CS5 and working with AS3.0

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question. Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want the data sent to the browser where it can be rendered with SVG (or canvas, or WebGL) so that it can be interactive. The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to reduce a data set in a visually lossless way for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
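    For reference, here is a minimal C# sketch of the Douglas-Peucker simplification mentioned in the question, with the tolerance expressed in the same units as the coordinates (the Pt struct and the tolerance value are assumptions for illustration; as the question points out, this approach simplifies each polygon independently and does not preserve shared borders):

        using System;
        using System.Collections.Generic;

        struct Pt
        {
            public double X, Y;
            public Pt(double x, double y) { X = x; Y = y; }
        }

        static class Simplify
        {
            // Classic Douglas-Peucker: keep only vertices that deviate from the
            // current chord by more than 'tolerance' (e.g. the ground size of one
            // pixel at the current map scale).
            public static List<Pt> DouglasPeucker(List<Pt> points, double tolerance)
            {
                if (points.Count < 3)
                    return new List<Pt>(points);

                int index = -1;
                double maxDistance = 0;
                for (int i = 1; i < points.Count - 1; i++)
                {
                    double d = PerpendicularDistance(points[i], points[0], points[points.Count - 1]);
                    if (d > maxDistance) { maxDistance = d; index = i; }
                }

                // Everything lies within tolerance of the chord: collapse to its endpoints.
                if (maxDistance <= tolerance)
                    return new List<Pt> { points[0], points[points.Count - 1] };

                // Otherwise recurse on both halves and stitch them back together.
                var left = DouglasPeucker(points.GetRange(0, index + 1), tolerance);
                var right = DouglasPeucker(points.GetRange(index, points.Count - index), tolerance);
                left.RemoveAt(left.Count - 1);   // avoid duplicating the split vertex
                left.AddRange(right);
                return left;
            }

            static double PerpendicularDistance(Pt p, Pt a, Pt b)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double length = Math.Sqrt(dx * dx + dy * dy);
                if (length == 0)
                    return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
                return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / length;
            }
        }

        // Usage sketch: var simplified = Simplify.DouglasPeucker(ring, toleranceInMapUnits);

    A topology-aware alternative would simplify the shared boundary arcs between polygons once and rebuild the polygons from those arcs, which is what the question is ultimately after.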

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader that almost gave me a headache. I suggest all of you read it before continuing with this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believed that he faced this situation because he did not understand the difference between SQL SERVER – DELETE, TRUNCATE and RESEED Identity. I wrote a follow-up blog post explaining the difference between them. I asked a small question in the second blog post and I received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle.
    - Create Table with Seed Identity = 11
    - Insert Value and Check Seed (it will be 11)
    - Reseed it to 1
    - Insert Value and Check Seed (it will be 2)
    - TRUNCATE Table
    - Insert Value and Check Seed (it will be 11)
    Let us see the T-SQL script for the same.
        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Reseed to 1
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Question for you Here
        -- Clean up
        DROP TABLE [TestTable]
        GO
    Now let us see the output of the three SELECT statements: 1) the first SELECT after creating the table, 2) the second SELECT after reseeding the table, 3) the third SELECT after truncating the table. The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you're shown those requests in the application's call tree, so you can see what .NET code caused those queries to run. It's particularly helpful if you're using an ORM which automagically generates and runs queries for you, but which doesn't necessarily do it in the most efficient way possible. Now, by popular demand, we've added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you're using the Devart dotConnect data providers instead of the native .NET ones, so we've added support for those drivers too. Hope it helps! For the record, here's a list of supported connectors (the new additions are the MySQL / MariaDB and PostgreSQL connectors and the Devart dotConnect providers):
    SQL Server
    - .NET Framework Data Provider
    - Devart dotConnect for SQL Server
    Oracle
    - .NET Framework Data Provider
    - Oracle Data Provider for .NET
    - Devart dotConnect for Oracle
    MySQL / MariaDB
    - MySQL Connector/Net
    - Devart dotConnect for MySQL
    PostgreSQL
    - Npgsql .NET Data Provider for PostgreSQL
    - Devart dotConnect for PostgreSQL
    SQL Server Compact Edition
    - .NET Framework Data Provider for SQL Server Compact Edition
    - Devart dotConnect for SQL Server Pro
    Have we missed a connector or database which you'd find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.
    Getting the Contacts
    Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts.
    Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers
    Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to use a filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'
    Synchronizing the Contacts
    Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.
    Pre-Built Demo Application
    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for QuickBooks V3, and will expire in 2013.
    Source Code
    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.
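    As a rough illustration of the pattern the article describes, the sketch below loads both sides into DataTables through the standard ADO.NET interfaces. The class names GoogleConnection and QuickBooksConnection in the usage comment are assumptions standing in for whatever connection classes the providers actually expose, and the connection strings are placeholders; check the provider documentation for the real names.

        using System.Data;

        static class ContactLoader
        {
            // Runs a SELECT through any ADO.NET connection and buffers the rows in a DataTable.
            public static DataTable LoadTable(IDbConnection connection, string sql)
            {
                var table = new DataTable();
                using (connection)
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = sql;
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        table.Load(reader);
                    }
                }
                return table;
            }
        }

        // Usage sketch (provider class names assumed):
        // var googleContacts = ContactLoader.LoadTable(new GoogleConnection("..."), "SELECT * FROM Contacts");
        // var qbCustomers = ContactLoader.LoadTable(new QuickBooksConnection("..."),
        //     "SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'");
        // The two DataTables can then be compared, and INSERT/UPDATE/DELETE statements issued as needed.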

    Read the article

  • SQLAuthority News – BI Quiz Question – How to Optimize Cube? – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here. The details of the quiz are as follows: Working with huge amounts of data is very common when it comes to data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly, and suddenly one day it starts returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here. Here are a couple of hints about what I am looking for in an answer:
    - How do you get to the root of the slow performance?
    - Is hardware causing the problem, or something else?
    - Is the slowness due to how the cube is built and its granularity?
    - Do the underlying tables require maintenance?
    - Is there a chance to refactor the process?
    - Are there any tools which can help diagnose the slowness of the cube?
    It is not necessary to answer all the questions – but they are something to start with. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Using json as database with EF, how can I link EF and the json file during DbContext initialization?

    - by blacai
    For a personal testing project I am considering creating a SPA with the following technologies: ASP.NET MVC + EF + WebAPI + AngularJS. The project will make use of a small amount of data, so I was thinking I could use just a .json file as storage. But I am not sure how to proceed with the link between EF and the json file in the initialization of the DbContext. I found a related StackOverflow question: http://stackoverflow.com/questions/13899342/can-we-use-json-as-a-database I know the basics of editing files and storing data in them. What I tried is to get the data from the json file in the initializer method and create the objects one by one. This is more a doubt about how this works: if I save/update an object in the DbContext, do I need to go through all the elements and add/update it manually? Is it better to rewrite the complete file? According to this http://stackoverflow.com/questions/7895335/append-data-to-a-json-file-with-php it is not good practice to use JSON/XML for data which will be manipulated. Does anyone have experience with anything similar? Is this a really bad idea, and should I use another kind of data storage?
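    One way to wire a .json file into DbContext initialization is to keep EF pointed at a regular store (LocalDB, SQL CE, etc.) and only use the file to seed it from a custom database initializer. Below is a minimal sketch of that idea; it assumes EF 6 and Json.NET, and the Item entity, DemoContext class and seed-data.json file name are made up for the example rather than taken from the question.

        using System.Collections.Generic;
        using System.Data.Entity;   // EF 6 assumed
        using System.IO;
        using Newtonsoft.Json;      // Json.NET assumed (available via NuGet)

        public class Item
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class DemoContext : DbContext
        {
            public DbSet<Item> Items { get; set; }
        }

        // Seeds the EF store from a JSON file the first time the database is created.
        public class JsonSeedInitializer : CreateDatabaseIfNotExists<DemoContext>
        {
            private readonly string _path;
            public JsonSeedInitializer(string path) { _path = path; }

            protected override void Seed(DemoContext context)
            {
                var items = JsonConvert.DeserializeObject<List<Item>>(File.ReadAllText(_path))
                            ?? new List<Item>();
                context.Items.AddRange(items);
                context.SaveChanges();
            }
        }

        // At application startup:
        // Database.SetInitializer(new JsonSeedInitializer("seed-data.json"));

    Writing changes back to the .json file would still have to be done by hand (for example by serializing the DbSet again after SaveChanges), which is exactly the manual bookkeeping the question is worried about.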

    Read the article

  • How do I create an encrypted system with multiple Linux distributions?

    - by niels
    A few weeks ago I created a completely encrypted system on a notebook and I must say I like the idea. It's a little bit annoying to enter the password on every boot, but it's nice to know that even if I lose the computer I don't give my data to other people. With the alternate CD it's easy to do. Now I have to set up a new system where I want to combine the new idea with my usual usage strategy. There I have several partitions: 3 system partitions, home, and different data partitions for vm-data, photo-data and mp3-data. The background is that I prefer not to upgrade a system in place; I prefer to install the new version in parallel with the old system, so I can easily test it. Obviously the data partitions are used by both systems. My question is: how can I easily combine my strategy with the crypto approach? Or is it impossible? Doing the crypto setup by hand is, in my eyes, too complicated.

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate back to the script with retrieved data to be passed on for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app and not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC, etc. The way it is done now is that they invoke SQL*Plus and then parse the output, which is clearly designed for human-eye consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for shell, e.g. ksh or bash. What I would like is an API that would return data in a 2-dimensional array of strings (for the lack of typing) so that I can just read DB data like that. The way they do it now is akin to parsing a regular web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks

    Read the article
