Search Results

Search found 58499 results on 2340 pages for 'temporal data'.


  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Exporting data from a DataGrid to Excel is a very common task, and it can be solved in different ways; the right choice depends on the kind of app you are designing. If you are developing an app for the enterprise that will be installed on several computers, you can state the system requirements your app needs in order to work for the client (or the customer will dictate the system requirements your app should work under). In this case you can use COM for the export (using the infrastructure of Excel or OpenOffice). This approach gives you much more flexibility and lets you use all the features of the Excel application; I'll talk about it below. The other case is an app for personal use that can be installed on any home computer; here it is not reasonable to ask the user to install MS Office or OpenOffice just to use your app. Instead you can use third-party tools for the export, or export to an xml/html format which MS Office can read (the approach used by JIRA). But in this case it is harder to satisfy user tasks such as creating a document with landscape orientation and defined margins for printing. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 using dynamic objects, and give you an approach that lets you export data from the Silverlight and WPF DataGrid controls. Read more...
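
    The article's full approach isn't reproduced here, but as a rough illustration of the COM route it describes, here is a minimal sketch of driving Excel from .NET 4 via dynamic late binding (the grid items, column names, and properties are placeholders, not the author's actual code):

        // Requires a local Excel installation; all names below are illustrative.
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        dynamic excel = Activator.CreateInstance(excelType);
        excel.Visible = true;

        dynamic workbook = excel.Workbooks.Add();
        dynamic sheet = workbook.ActiveSheet;

        // Header row, then one row per item bound to the grid.
        sheet.Cells[1, 1] = "Name";
        sheet.Cells[1, 2] = "Amount";
        int row = 2;
        foreach (var item in gridItems)          // gridItems stands in for the DataGrid's ItemsSource
        {
            sheet.Cells[row, 1] = item.Name;
            sheet.Cells[row, 2] = item.Amount;
            row++;
        }

    In Silverlight 4 the same idea would go through AutomationFactory in an elevated out-of-browser app rather than Activator.CreateInstance.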

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is processed in exactly the same way. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    break;

                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to give concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

        Parallel.ForEach(customers, (customer, parallelLoopState) =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    parallelLoopState.Break();
                else
                    customer.LastEmailContact = DateTime.Now;
            }
        });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case so that we only set the property on the customer when we succeeded. This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code: Early termination in parallel routines is not immediate. Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement at a random location in our loop body – which could have horrific consequences for our code's maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            if (element > 10)
            {
                Console.WriteLine("Breaking on {0}", element);
                state.Break();
            }
            Console.WriteLine(element);
        });

    If we run this, we get a result that may seem unexpected at first:

        0
        2
        1
        5
        6
        3
        4
        10
        Breaking on 11
        11
        Breaking on 12
        12
        9
        Breaking on 13
        13
        7
        8
        Breaking on 15
        15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see that our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following. Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you might otherwise expect. This is due to my second observation: Parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
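
    As a rough sketch of the Stop() and IsStopped behavior described above – reusing the hypothetical theStore and customers objects from the earlier examples, so this is an illustration rather than code taken from the post – an early-terminating version might look like:

        Parallel.ForEach(customers, (customer, state) =>
        {
            // If another thread has already requested a stop, skip further work.
            if (state.IsStopped)
                return;

            // Stop the entire loop as soon as any email attempt fails; unlike Break(),
            // no other elements (earlier or later in the collection) are guaranteed to be processed.
            if (theStore.EmailCustomer(customer) == false)
            {
                state.Stop();
                return;
            }

            customer.LastEmailContact = DateTime.Now;
        });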

    Read the article

  • Writing algorithm on 2D data set in plain english

    - by Alexandre P. Levasseur
    I have started an introductory Java class and the material is absolutely horrendous, and I have to get excellent grades to be accepted into the master's degree, hence my very beginner question: In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s) and the treatments applied to the input to get the output. My question is about the wording of algorithms, because I could probably code it but I can't seem to put it on paper in a coherent way. The game has a 9x9 board and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can't get my head around how to formulate the solution into the terms described above, or maybe it's just too simple. Here's what I was thinking: Input: A 2-dimensional set of data of size 9 by 9 to be solved, and a number to search for. Output: A 2-dimensional set of data of size 9 by 9, either solved or partially solved. Treatment: Scan each set of 3x9 and 9x3 squares. For each line or column of a 3x3 square, check if the number matches a line (or column). If it does, then move to the next line (or column). If not, then proceed to the next 3x3 square in the same line (or column). Rinse and repeat. Does that make sense as an algorithm written in plain English? I'm not looking for an answer to the algorithm per se, but rather for guidance on the formulation of algorithms in plain English.

    Read the article

  • A KSH adattárháza: Oracle Essbase és Oracle Database alapon

    - by Fekete Zoltán
    The metadata-driven data warehouse of the Hungarian Central Statistical Office (KSH) rests on three important Oracle products. The data is available on the internet from the KSH Dissemination Database. Data from KSH in English. As I write these lines, at 21:36 on a Friday night, 81 online users were querying the data. :) - Oracle Essbase multidimensional OLAP server (technical info) - Hyperion Interactive Reporting query tool (technical info) - Oracle Database Enterprise Edition. The English-language customer snapshot: Hungarian Central Statistical Office Provides 200,000 External Users with Secure Online Access to Data. The Hungarian-language success story: 60 percent of KSH's statistical data is accessible, independently of browser and platform, to roughly 200,000 internet users per year. DSS Consulting Kft. and Oracle Consulting played a major role in selecting the product and in shaping and delivering the project. The most important results achieved during the project: - data warehouse: 150-200 concurrent users, which amounts to 200,000 users per year - Essbase's memory-based storage structure: near real-time access - The system is platform- and browser-independent, so a wide range of users can reach the statistical data - A custom maintenance application built on the native Java API and XMLA support - The statisticians build and maintain the multidimensional databases without any special IT background - Oracle Hyperion Interactive Reporting: columnar, cross-tab, sectioned, charted, web-based queries. The following KSH presentation from the 2009 HOUG conference can be downloaded: Hyperalea iacta est - the KSH's Essbase-based data warehouse system. The newly published success story: in English and in Hungarian.

    Read the article

  • Manage a flexible and elastic Data Center with Oracle VM Manager (By Tarry Singh - PACKT Publishing)

    - by frederic.michiara
    For anyone looking for an easy read and a good first approach to Oracle VM Manager and Oracle VM Servers, I would recommend the following book, even though it was written for 2.1.2 and Oracle VM 2.2 is now available: Oracle VM Manager 2.1.2 - Manage a Flexible and Elastic Data Center with Oracle VM Manager. It covers: quickly learning to install Oracle VM Manager and Oracle VM Servers; managing your virtual data center using Oracle VM Manager; importing VMs from the web, templates, repositories, and other VM formats such as VMware; and powerful Xen hypervisor utilities such as xm, xentop, and virsh – a practical hands-on book with step-by-step instructions. Oracle VM experts might be frustrated, but to me it is not aimed at Oracle VM experts; it is for those who need an introduction to the subject with good coverage of everything you need to know. This book is available at https://www.packtpub.com/oracle-vm-manager-2-1-2/book Table of contents: https://www.packtpub.com/article/oracle-vm-manager-2-1-2-table-of-contents Sample chapter: https://www.packtpub.com/sites/default/files/sample_chapters/7122-oracle-virtualization-sample-chapter-4-oracle-vm-management.pdf Also read articles from Tarry Singh on http://www.packtpub.com/ : Oracle VM Management: http://www.packtpub.com/article/oracle-vm-management-1 and Extending Oracle VM Management: http://www.packtpub.com/article/oracle-vm-management-2 Hope you'll enjoy this book as a first approach to Oracle VM. For more information on Oracle VM: Oracle VM on OTN: http://www.oracle.com/technology/products/vm/index.html Oracle VM Wiki: http://wiki.oracle.com/page/Oracle+VM Oracle VM on IBM System x: http://www-03.ibm.com/systems/x/solutions/infrastructure/erpcrm/oracle/virtualization.html

    Read the article

  • Increase Performance and Agility with Oracle’s New Data Center Fabric Solutions

    - by Cinzia Mascanzoni
    Join this Webcast on Tues., December 11, 2012, 10 a.m. PT / 1 p.m. ET and hear from S.K. Vinod, Senior Director of Product Management, Oracle Virtual Networking products. He'll show you how the fast, simple, and agile architecture of Oracle Fabric Interconnect provides dynamic network and storage connectivity to thousands of servers. You will see how to use Oracle Software Defined Network (SDN) to connect any resource on the data center fabric quickly – without incurring downtime or requiring network reconfiguration. With Oracle Virtual Networking products, you can: streamline your data center connectivity; reduce complexity by 70%; cut infrastructure expenses by up to 50%; increase application performance up to 30x; provision new services and reconfigure resources in minutes; and simplify deployments with wire-once infrastructure. During the Webcast, you'll also have the opportunity to chat directly with Oracle experts. Visit OPN's Server & Storage Systems Knowledge Zones anytime to learn about partner engagement, training, resources, and replays of other webcasts to jump-start business. You can also email us your questions. Unable to attend live? Register anyway – we'll send you the on-demand link to the Webcast!

    Read the article

  • Cross Platform Data Access with Xamarin & C# For iPhone, iPad, and Android - Local, Web Services, & Sql Server

    - by Wallym
    The following is a link to cross-platform data access training with Xamarin & C#. It is intended for use on iPhone, iPad, and Android devices. The course covers local data in SQLite, calling web services via REST and JSON, and calling SQL Server. Url: http://www.learnnowonline.com/course/cpx2/xamarin-cross-platform-data-access/  Course Data  Applications live on data. These applications can vary from an online social network service, to a company’s internal database, to simple data, and all points in between. This course focuses on how to easily access data on the device, communicate back and forth with a web service, and then finally talk to a SQL Server database. Outline Local Data (27:36) Introduction (00:36) Problem (01:57) Solution (02:01) LINQ (02:03) LINQ Status (00:48) SQLite (02:18) SQLite - .Net Developers (00:50) SQLite-net (01:07) SQLite-net Attributes (02:10) Getting Started (01:09) CRUD (01:05) SQLite Platforms (01:17) Demo: SQLite – Android (04:53) Demo: SQLite – iOS (04:56) Summary (00:20) Web Services Data (32:43) Introduction (00:19) Async Commands (03:15) HttpClient (01:26) HTTP Verbs (01:29) Notes (00:58) GET Operation (01:37) JSON.NET (01:50) Images (01:16) Other Http Verbs (01:27) Post (03:18) Demo: Http – iOS prt1 (05:26) Demo: Http – iOS prt2 (05:28) Demo: Http – Android (04:20) Summary (00:27) Direct Data (12:33) Introduction (00:23) Remote Data - Direct (02:47) Sql Server (01:15) Demo: Sql Server – iOS (04:15) Demo: Sql Server – Android (01:49) "codepage 1252 not supported" (01:03) Other Resources (00:43) Summary (00:15) Note: Thanks to Frank Krueger for his data access library SQLite-net. It is very helpful and I have used it in some other projects beyond just this training session.
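
    The course walks through these pieces in detail; purely as an illustrative sketch of the SQLite-net style of local data access it covers (the table, fields, and dbPath below are invented for the example, not taken from the course), the pattern looks roughly like this:

        using SQLite;

        public class Contact
        {
            [PrimaryKey, AutoIncrement]
            public int Id { get; set; }
            public string Name { get; set; }
            public string Phone { get; set; }
        }

        // dbPath would be a platform-specific writable location on iOS or Android.
        var db = new SQLiteConnection(dbPath);
        db.CreateTable<Contact>();                                    // creates the table if it doesn't exist
        db.Insert(new Contact { Name = "Ada", Phone = "555-0100" });  // INSERT
        var matches = db.Table<Contact>()                             // LINQ-style query
                        .Where(c => c.Name.StartsWith("A"))
                        .ToList();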

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be located in the database or stay in the XML file. The file can range from 4 MB to 12 MB. The file's depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized result in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are: I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information. I wouldn't have to load / deserialize the XML and could just use my already open database connections. Searching for the data would be quicker(?) as I would just perform an SQL query rather than recursing the file. Does anyone have any ideas about which is better, and which option uses more resources on the server or would be quicker? EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
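
    For reference, the "deserialize once and cache it" approach described above can be sketched roughly like this (SiteFeed and the path are hypothetical stand-ins, not the poster's actual types):

        using System.IO;
        using System.Xml.Serialization;

        public static class FeedCache
        {
            private static readonly object Sync = new object();
            private static SiteFeed _feed;   // SiteFeed stands in for the feed's root type

            public static SiteFeed Get(string xmlPath)
            {
                lock (Sync)
                {
                    if (_feed == null)
                    {
                        var serializer = new XmlSerializer(typeof(SiteFeed));
                        using (var stream = File.OpenRead(xmlPath))
                        {
                            _feed = (SiteFeed)serializer.Deserialize(stream);
                        }
                    }
                    return _feed;
                }
            }
        }

    The database alternative would replace all of this with a parameterised SQL query against the already-open connection.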

    Read the article

  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
    Background: It is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI); NOTE this is just a test VM, so it was not well planned ahead: ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home); weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff); btrfs1.vdi - /dev/sdc (part of btrfs -m raid1 -d raid1 configuration); btrfs2.vdi - /dev/sdd (part of btrfs -m raid1 -d raid1 configuration); more.vdi - /dev/sde (added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete to free up inodes, so I just added the new virtual HDD, created a PV, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-). What happened? Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; I then clicked delete (should have clicked keep files, damn!) by mistake when detaching. Unfortunately I didn't have a backup for it. All too late. What I have tried: Tried to undelete the vdi files (using testdisk and photorec) but it takes too long and recovered heaps of .vdi files that I didn't want (huge, filled the disk, damn!). I finally gave up. Fortunately most of the data is on a separate ext4 partition and btrfs volumes. Out of curiosity, I still tried to mount the logical volumes and see if it is possible to at least recover /var and /etc. I tried to use System Rescue CD to boot and activate the volume groups, and I got: Couldn't find device with uuid xxxx. Refusing activation of the partial LV root. Use --partial to override. 1 logical volume(s) in volume group "ubuntu" now active. I was able to mount the home LV but not the root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data (on LV root - /) was striped onto more.vdi (PV), so I know it's almost impossible to recover. But I am still curious about how system administrators/DevOps folks deal with this sort of situation ;-) Thanks in advance.

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case where I want to try out an idea for an application, but I want to avoid investing a lot of effort in coding the UI, workflows, database schema, etc. before I see that it's going to be useful to me (as an example of a potential user). My idea is to stay lightweight and put all the data in text files. So the components could be the following: Domain objects are represented by text files or their fragments. Domain objects are grouped by their type using directories. Structure the files using a format that is both human- and machine-friendly, e.g. YAML. Use some smart text editor (e.g. vim, emacs, rubymine) to edit and navigate those files. Use color schemes and macros/custom commands of the text editor to effectively manipulate those files. Use scripts (or a lightweight web framework like Sinatra) to try some business logic ideas on top of the data model. The question is: Are there tools or toolkits that support or can be adapted to this approach? Also any ideas and links to articles or other knowledge sources are very welcome. And a more specific question: What is the simplest way to index a set of YAML files and keep that index updated?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”, in other words checking that option will cause SSDT to insert an arbitrary default value into your newly created NON NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight, in other words smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • What is the Sarbanes-Oxley (SOX) Act?

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, it outlines which pieces of data are to be stored as well as the storage duration. The SOX act targets the financial side of companies, but its impacts can be seen within the technology arena as well, because IT is responsible for storing all of a company’s electronic records regardless of file type. The act specifies that all records and electronic messages must be saved for no less than five years, according to SearchCIO. In addition, the consequences for non-compliance are fines, imprisonment, or both. Sarbanes-Oxley Act rules that affect the management of electronic records, according to SearchCIO: allowed practices regarding destruction, alteration, or falsification of records; the retention period for records storage (best practices indicate that corporations securely store all business records using the same guidelines set for public accountants); and the types of business records that need to be stored, namely business records and business communications, including electronic communications. References: SOXLaw: The Sarbanes-Oxley Act 2002. Retrieved May 2011 from http://www.soxlaw.com/ SearchCIO: What is Sarbanes-Oxley Act (SOX)? Retrieved May 2011 from http://searchcio.techtarget.com/definition/Sarbanes-Oxley-Act

    Read the article

  • Microsoft MVP Award &ndash; Data Platform Development

    - by Dane Morgridge
    For those who don't already know, yesterday I received my first Microsoft MVP Award, in Data Platform Development. With fewer than 5,000 MVPs in the world overall and about 20 in the Data Platform category, saying I am honored would be an understatement. From the first time I spoke at a code camp, I was totally hooked, and I have had a blast travelling around the east coast speaking at code camps and user groups. I'd like to take the time to thank Dani Diaz (@danidiaz) for the nomination and everyone who supported me, especially my wife Lisa for letting me travel and speak as much as I have and putting up with late nights and such. Roska Digital, my employer, also deserves a shout out for supporting me and giving me the necessary time off to get to speaking engagements. With any luck, the next year will be at least as much fun as the last one, if not more. I hope to see you at a code camp or user group meeting soon! I would also like to send congratulations to the other new Philly Area MVPs: John Angelini (@johnangelini) & Ned Ames (@nedames). You can find out more about the Microsoft MVP Award at https://mvp.support.microsoft.com/

    Read the article

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.Net 3.5 website I wrote some years ago which consists of a "main" public site, and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database so staff can just check a box for instance to turn the lead into a customer. My question therefore is how to go about doing this? Possible options I've thought of: Move the new lead form into the customer database subsite (with authentication turned off). Add database handling code to the main site. (No, not seriously considering this duplication of effort! :) Design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
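
    One way to picture the third option is a small HTTP endpoint inside the authenticated sub-site that the main site's new-lead form POSTs to. The sketch below is purely illustrative – NewLeadHandler, LeadRepository, and the form field names are invented, and validation, authentication of the calling site, and error handling are omitted:

        using System.Web;

        public class NewLeadHandler : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                string name  = context.Request.Form["name"];
                string email = context.Request.Form["email"];

                LeadRepository.SaveLead(name, email);   // hypothetical data-access call into the customer database

                context.Response.StatusCode = 201;      // Created
            }
        }

    Registered as something like newlead.ashx (and locked down so only the main site can call it), this keeps all database handling code in the sub-site while still letting the public form feed the database directly.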

    Read the article

  • Data Networks Visualized via Light Paintings [Video]

    - by ETC
    All around you are wireless data networks: cellular networks, Wi-Fi networks, a world of wireless communication. Check out this awesome video of network signals mapped over a cityscape. What would happen if you made a device that allowed you to map signal strength onto film? In the following video electronics tinkerers craft an LED meter and use it to paint onto long exposure photographs with phenomenal results. Immaterials: light painting Wi-Fi [via Make]

    Read the article

  • Which tools should be used for data migration between environments?

    - by Paula Speranza-Hadley
    With the Oracle Utilities Application Framework based products there are a number of tools provided that can be used to transfer data from one environment to another. There are three main tools that implementations use:
    - ConfigLab - A configurable copy facility that is metadata aware and therefore understands the relationships between objects and, by invoking the relevant maintenance objects, validates the data copied. This utility uses the object validation to help ensure data integrity. Basically it is a set of configuration tables and a set of batch jobs to perform the migration of data.
    - Bundling - A configurable release management tool that allows exporting of Advanced Configuration Environment based objects (business services, business objects, UI Maps etc.) from one environment to another.
    - Blueprint - An Oracle Utilities Software Development Kit (SDK) based tool to import metadata from the development environment to your initial testing environment. The utility is command line based and basically uses a text based configuration file to drive the utility on the source and target sides.
    Each tool has a role in an implementation, but you must be careful to use the right tool for the right job. The suggestions are as follows:
    - Only use the Blueprint tool for migrating data from your development platform to your initial test environment. The Blueprint tool is not designed to move large amounts of data, is risky if not used correctly, and can potentially break the integrity of your data.
    - The SDK handles the configuration data it is intended for (mainly metadata). This should not be extended because, while it can perform data migration on any data, it is not efficient and is risky for certain types of configuration data.
    Additional information can be found in the following whitepaper: Oracle Utilities Application Framework - Release Management - Software Configuration Management on MyOracle.com

    Read the article

  • save data in the database to xml in c [closed]

    - by Jayanth N
    I have some data in the database, and I want that data to be stored as an XML file. I'm using PostgreSQL 9.1 for the database; for XML processing I'm using libxml (http://xmlsoft.org/). I'm writing the code in C. Please help me. Detailed explanation: I have a client which sends me an XML file. The server receives the XML file, parses it, and stores it in the database. From the database I want to send the details in the form of XML back to the client. client: <employee> <name>glen</name> <telephone>123456789</telephone> </employee> <employee> <name>gwen</name> <telephone>123456789</telephone> </employee> The server parses this XML file as displayed below: name: glen telephone: 123456789 name: gwen telephone: 123456789 and saves it in a database (PostgreSQL 9.1). If the client requests the details of the employees, I have to send them in XML form from the database. I don't know how to do this; can you help me out?

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if so, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions, and also spells out the expected result values, like so: unit::assert( test_me() == 17 ) What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example: unit::probe( test_me() ) Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. How is this scheme called? Or how would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm inquiring for use with scripting languages, and not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17 While the result data is thus no longer hard-coded (it's a comment), that's still not a complete separation and of course would work only for scalar results.
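
    As a rough, hypothetical sketch of the record-then-verify scheme described above – the helper name, storage format, and file layout are all invented here, and the idea carries over to any scripting language even though the sketch is in C# – a probe could look like:

        using System;
        using System.Globalization;
        using System.IO;

        public static class Probe
        {
            private static readonly string Dir =
                Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "expected");

            public static void Check(string testName, object actual)
            {
                Directory.CreateDirectory(Dir);
                string file = Path.Combine(Dir, testName + ".txt");
                string value = Convert.ToString(actual, CultureInfo.InvariantCulture);

                if (!File.Exists(file))
                {
                    // First run: collect the result and store it as the expected value.
                    File.WriteAllText(file, value);
                    return;
                }

                // Later runs: verify the current result against the stored value.
                string expected = File.ReadAllText(file);
                if (expected != value)
                    throw new Exception(string.Format(
                        "{0}: expected '{1}' but got '{2}'", testName, expected, value));
            }
        }

        // usage: Probe.Check("test_me", TestMe());  // the "17" lives in expected/test_me.txt, not in the test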

    Read the article

  • storing data for maps database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps are of all different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts: I won't need to do queries to find POI inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing a map as a GeoJSON object (I am using a JS mapping library that accepts GeoJSON). The only issue is: what if I want a map of the US Northeast? Then I would have GeoJSON for the US and a separate one for the US Northeast, which would be redundant. Would it make sense to have a shape database containing each state, so that when I need a map of the US I can query for every state, and when I need a map of the US Northeast I can again query for just what I need? Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
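
    One way to picture that per-region approach: store one GeoJSON Feature string per region in a shapes table and assemble whatever FeatureCollection a given map needs on the fly. A minimal, illustrative sketch (the table, column, and method names are made up):

        using System.Collections.Generic;
        using System.Text;

        // featureJson holds GeoJSON Feature strings for the requested regions, e.g. from
        // SELECT feature_json FROM region_shapes WHERE region_code IN ('ME', 'NH', 'VT', ...)
        static string BuildFeatureCollection(IEnumerable<string> featureJson)
        {
            var sb = new StringBuilder();
            sb.Append("{ \"type\": \"FeatureCollection\", \"features\": [");
            sb.Append(string.Join(",", featureJson));
            sb.Append("] }");
            return sb.ToString();
        }

    The same rows then serve the full-US map, the Northeast map, or any other grouping without storing duplicate geometry.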

    Read the article

  • Read On Phone Pushes Data from Your Desktop to the Appropriate Android App

    - by ETC
    Read On Phone is a free Android application that intelligently pushes data to your phone from your browser. Rather than simply opening the URL on your phone, it opens the appropriate application for the task and formats text. Most send-to-phone type tools simply take the URL of the web page you’re looking at on your computer and shuttle it to your phone. Read On Phone is a more active and effective tool. When you send a page that is text, it formats the text for easy reading on your phone. When you send a YouTube video, map, or telephone number, it opens up the appropriate tool on your phone, such as your YouTube viewer, Google Maps, or your phone dialer. In addition to that handy functionality, Read On Phone also includes adjustments for day and night reading, font size, auto-scrolling, and pagination. Read On Phone is available as both a Chrome extension and as a bookmarklet for cross-browser use. Hit up the link below for additional information. Read On Phone

    Read the article

  • Examples of limitations in IT due to different bit length by design

    - by Alaudo
    I am teaching the course "Introduction to Programming" to first-year students and would like to find interesting examples where the datatype size in bits, chosen by design, led to certain known restrictions or important values. Here are some examples: Because the Bell teleprinter used a 7-bit code (later adopted as ASCII), we still often have to encode attachments in electronic messages so that they contain only 7-bit data. The classic limitation of a 32-bit address space leads to the 4 GB maximum RAM size available to 32-bit systems, and to the 4 GB maximum file size in FAT32. Do you know other interesting examples of how the choice of data type (and especially its length in bits) has influenced the modern IT world? Added after some discussion in comments: I am not going to teach how to overcome limitations. I just want them to know that 1 byte can hold values from -128..127 or 0..255, that 2 bytes cover the range 0..65535, etc., by providing examples they know from other sources, like the above-mentioned base64 encoding. We are just learning the basic datatypes and I am trying to find a good reference for "how large" these types are.
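
    For the classroom, a tiny snippet along these lines (C# here, but any language with fixed-width integer types would do) makes the ranges and the wrap-around behaviour concrete:

        Console.WriteLine("sbyte : {0} .. {1}", sbyte.MinValue, sbyte.MaxValue);    // -128 .. 127
        Console.WriteLine("byte  : {0} .. {1}", byte.MinValue, byte.MaxValue);      // 0 .. 255
        Console.WriteLine("ushort: {0} .. {1}", ushort.MinValue, ushort.MaxValue);  // 0 .. 65535
        Console.WriteLine("int   : {0} .. {1}", int.MinValue, int.MaxValue);        // the classic 32-bit limits

        byte b = 255;
        b++;                    // unchecked by default in C#: wraps around to 0
        Console.WriteLine(b);   // 0 - the same wrap-around that sits behind many historical size limits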

    Read the article

  • Upgrading/Installing Demantra 7.3.1.1? Check this out!

    - by user702295
    Here is a summary of the release 7.3.1.1 install/upgrade/feature topics:
    - Data Preservation Setting for General Levels
    - Deploying Demantra Application Server 10g
    - Important upgrade information
    - Known upgrade issues
    - Mozilla Firefox browser
    - Installer issues
    - Reviewing / simulating General Level data such as CTO Base Model Demand
    - Failure Rate calculation
    - Demantra SSL Client Authentication and Java 6
    - CTO functionality does not work in release 7.3.1.1 after upgrading from 7.3.0 using the 'Platform Upgrade Only' option
    - User privileges and Export Worksheet to Excel
    - Cookie attribute causes logging issue in worksheet
    - List of bugs fixed in 7.3.1.1
    See the following for details: Demantra 7.3.1.1 Install / Upgrade Known Issues, Notes, Guidance, Defects, Workarounds (Doc ID 1370518.1) and Related Documents For Demantra Version 7.3.1.1 And If Demantra Supports The Required Stacks (Doc ID 1367141.1).

    Read the article

  • HTML Maps for DataViz

    - by jamiet
    I don’t talk about data visualisation (#dataviz) much on this blog because, well, it's not something I can claim to be particularly knowledgeable about – though that doesn’t stop me from having an opinion about it. I just stumbled upon an article that compares the media ecosystems of four technology companies, entitled Mapping The Entertainment Ecosystems of Apple, Microsoft, Google & Amazon, and apart from being a well-researched and well-written article (not something that the tech press excels in, in my opinion), I was struck by how well the author uses maps to tell a story to the reader. Take the map in this screenshot: Clicking on one of the four icons at the bottom of the map dynamically changes the shaded areas of the map to indicate in which countries that company offers its services. It's aesthetically pleasing, but more importantly it instantly conveys useful information to me. What I love about it most of all, though, is that it's all pure HTML – I don’t need any poxy Silverlight or Flash plugin to view the maps and interact with them – all I need is an up to date web browser (I use Chrome v22, although I tested using IE9 and it worked fine there too). This is how I believe data visualisation should be: conveying useful info in a friction-free way. Maps are a great way of achieving that – I just wish more people agreed with me about calendars as a mechanism for doing the same! @Jamiet

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at: http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision, I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (simple slope). I'm having trouble with the algorithm of how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way it is stored as a 1D array in the example is a bit confusing, should I try to store the data as a 2D array instead? For test purposes, I'm thinking of just using the slope texture alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right foot collision point and set that as the new height of the sprite. I'm a little unclear also on when I should make these adjustments. Collisions are checked on every game.update() so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically, why is that exactly? Open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this, I didn't know it would be so much more complex than NES Mario style pure box platforming!
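
    As a sketch of the "scan upward for the first transparent pixel" idea described above – written against the XNA-style 1D Color array from the MSDN sample, with all names invented here rather than taken from that sample – the surface-finding step might look like:

        // slopeData was filled once via slopeTexture.GetData(slopeData);
        // (footX, footY) is the foot sensor's collision point in the slope texture's local coordinates.
        static int FindSurfaceY(Color[] slopeData, int textureWidth, int footX, int footY)
        {
            int y = footY;

            // Walk upward while the pixel at (footX, y) is solid (non-transparent alpha).
            while (y > 0 && slopeData[y * textureWidth + footX].A != 0)
            {
                y--;
            }

            return y;   // first transparent row above the slope surface at this X
        }

    The sprite's foot would then be repositioned (converted back to world coordinates) so it rests on the returned row, typically after the horizontal movement has been applied and before the next update runs.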

    Read the article
