Search Results

Search found 58779 results on 2352 pages for 'cleaned data'.


  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referrer link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of every client is stored in the same table, creating a mess. The same goes for the firewall rules, and for the invoice and support system. I'm looking for a way to structure my database in a more organized way to hold large amounts of data for multiple users. This is the first project I'm developing that deals with this much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database would store user data (email, password, id, etc.) and admin/website settings. Then each client would have a unique database labeled prefix_userid, which carries the tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but I would like to stay safe and be as efficient as possible.
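
    A minimal sketch of the single shared traffic table described above, assuming a MySQL backend and hypothetical column names; the composite index on (client_id, created_at) is what keeps per-client reporting fast as the table grows:

    ```sql
    -- Hypothetical shared pageview table; one row per hit, keyed by client.
    CREATE TABLE traffic (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        client_id  INT UNSIGNED    NOT NULL,   -- references the main users table
        ip_address VARBINARY(16)   NOT NULL,   -- fits both IPv4 and IPv6
        referrer   VARCHAR(2048)   NULL,
        page_url   VARCHAR(2048)   NOT NULL,
        user_agent VARCHAR(512)    NULL,
        created_at DATETIME        NOT NULL,
        PRIMARY KEY (id),
        KEY idx_client_created (client_id, created_at)  -- per-client, time-ranged queries
    ) ENGINE=InnoDB;

    -- Typical per-client report; the composite index keeps this cheap even with millions of rows.
    SELECT page_url, COUNT(*) AS views
    FROM traffic
    WHERE client_id = 42
      AND created_at >= '2010-05-01'
    GROUP BY page_url
    ORDER BY views DESC;
    ```

    With an index like this, query cost depends mainly on the rows matched per client rather than on total table size, which is the usual counter-argument to splitting databases purely for performance; per-client databases are more about operational isolation (backups, quotas, deletion).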

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services. As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is driven by an expanding privacy and regulatory environment coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper, entitled Cost Effective Security Compliance with Oracle Database 12c, outlines a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, spanning architecture/planning, implementation, and expert services, which, in turn, provide faster adoption of and return on investment in Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc, analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display along with sort-related information, such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column. In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays that subset and includes a paging interface that allows the user to step to the next or previous page, or to jump to a particular page number; to render the numeric paging interface we create and use a partial view. Like its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >
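
    A rough illustration of the paging query this boils down to, not the article's actual LINQ-to-SQL code: a ROW_NUMBER-based T-SQL query over the Northwind Products table, where @PageNumber and @PageSize stand in for the Model's paging-related properties:

    ```sql
    DECLARE @PageNumber INT = 3;   -- 1-based index of the page being viewed
    DECLARE @PageSize   INT = 10;  -- records to show per page

    -- Total record count drives the "jump to a particular page" links.
    SELECT COUNT(*) AS TotalRecords FROM dbo.Products;

    -- Only the requested page of rows is returned to the View.
    WITH NumberedProducts AS
    (
        SELECT ProductID, ProductName, UnitPrice,
               ROW_NUMBER() OVER (ORDER BY ProductName) AS RowNum
        FROM dbo.Products
    )
    SELECT ProductID, ProductName, UnitPrice
    FROM NumberedProducts
    WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                     AND @PageNumber * @PageSize
    ORDER BY RowNum;
    ```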

    Read the article

  • Filtering dates in a data view web part when using a web services data source

    - by Patrick Olurotimi Ige
    I was working on a data view web part recently and I had to filter the data based on dates. Since the data source was web services, I couldn't use the Offset which I blogged about earlier. When using web services to pull data in SharePoint Designer you have to use XPath. So, for example, this is the XPath that populates the rows from the SOAP response: <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/> But you need to add a predicate [] to filter on the date nodes. So you can do something like this (marked in red): <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/> For the filtering to work you need to have the date formatted as yyyyMMdd, as shown above. One more thing you may have noticed is the $fd variable. I created this variable by adding a calculated column to the list, something like [Created]-2. So basically what the XPath is doing is: get me data only when the Created date is greater than or equal to the Created date minus 2, which is 2 days less than the created date. Also note that when using web services in SharePoint Designer and trying to use the default filtering, you won't see greater than or less than in the comparison option list. :( Hope this helps.

    Read the article

  • Methods for getting static data from obj-c to Parse (database)

    - by Phil
    I'm starting out thinking about how best to code my latest game on iOS, and I want to use parse.com to store pretty much everything, so I can easily change things. What I'm not sure about is how to get static data into Parse, or at least the best method. I've read about using NSMutableDictionary, plists, JSON, XML files, etc. Let's say, for example, in AS3 I could create a simple data object like so... static var playerData:Object = {position:{startX:200, startY:200}}; Stick it in a static class, and bingo, I've got my static data to use how I see fit. Basically I'm looking for the best method to do the same thing in Objective-C, but the data is not to be stored directly in the iOS game; it is to be sent to parse.com to be stored in their database. The game (at least the distribution version) will only load data from the Parse database, so however I get the static data into Parse, I want to be able to remove it from the eventual iOS bundle. So, any ideas on the best methods to sort that? If I had longer on this project, it might be nice to use storyboards and create a simple game editor to send data to Parse... actually, maybe that's a good idea; it's just that I'm new to Objective-C and I'm looking for the most straightforward (read: quickest) way to achieve things. Thanks for any advice.

    Read the article

  • Do you have a data roadmap?

    - by BuckWoody
    I often visit companies where they ask me, “What is SQL Server’s roadmap?” What they mean is that they want to know where Microsoft is going with our database products. I explain that we’re expanding not only the capacities of SQL Server but the capabilities – we’re trying to make an “information platform” rather than just a data store. But it’s interesting when I ask the same question back: “What is your data roadmap?” Most folks are surprised by the question, thinking only about storage and archival. To them, data is data. Ah, not so. Your data is one of the most valuable, if not the most valuable, assets in your organization. And you should be thinking about how you’ll acquire it, how it will be distributed, how you’ll archive it (which includes more than just backing it up) and, most importantly, how you’ll leverage it. Because it’s only when data becomes information that it is truly useful. To be sure, the folks on the web that collect lots of data have a strategy for it – do you?

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how a Snapshot database works, here is a quick note on the subject. However, please refer to the official description on Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the “source database”. This database operates at the page level. When a Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to the Snapshot database, making the sparse file size increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only; there are a few considerations of CPU, disk I/O and memory, which will be discussed in future posts. The operations are: create a snapshot, delete data from the original DB, and restore data from the snapshot. First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Now let us delete something from the Original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, it copies the data pages to the Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the Original Source DB.
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the SELECT statements, and we can see that we were successfully able to restore the database from the Snapshot database. We can clearly see that this is a very useful feature in case you encounter a business need for it. I would like to ask readers to share more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. The complete script of the aforementioned operation, for easy reference, is as follows: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
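
    One small addition, not part of the original script: to watch the snapshot's sparse file grow on disk as pages are copied into it, you can query sys.dm_io_virtual_file_stats before and after the DELETE:

    ```sql
    -- Physical (on-disk) size of the snapshot's sparse file, in bytes.
    SELECT DB_NAME(database_id) AS database_name,
           file_id,
           size_on_disk_bytes
    FROM sys.dm_io_virtual_file_stats(DB_ID('SnapshotDB'), NULL);
    ```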

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

    Read the article

  • Customer Perspectives: Oracle Data Integrator

    - by Julien Testut
    The Data Integration Product Management team will be hosting a customer panel session dedicated to Oracle Data Integrator at Oracle OpenWorld. I will have the pleasure of presenting this session with three of our customers: Paychex, Ross Stores and Turkcell. In this session, you will hear how Paychex, Ross Stores and Turkcell utilize Oracle Data Integrator to meet their IT and business needs. Our customers will share how they use ODI in their environments, best practices, lessons learned and the benefits of implementing Oracle Data Integrator. If you're interested in hearing more about how our customers use Oracle Data Integrator, then I recommend attending this session: Customer Perspectives: Oracle Data Integrator, Wednesday, October 3rd, 1:15 PM - 2:15 PM, Marriott Marquis, Golden Gate C3. The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management for Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmaps, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products’ new capabilities and what to expect at OpenWorld. We hope to see you at OpenWorld and stay in touch via our future blogs.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs. Configuring the Data Nodes: the configuration of data node threads can be managed in two ways via the config.ini file. Either simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM, or use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to. The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors. 24 Threads and Beyond! An example of how to make best use of a 24 CPU/thread server box is to configure the following: 8 ldm threads, 4 tc threads, 3 recv threads, 3 send threads, and 1 rep thread for asynchronous replication. Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate activity across CPUs to ensure that conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node. On a 32 CPU/thread server: increase the number of ldm threads to 12, increase tc threads to 6, and provide 2 more CPUs for the OS and interrupts; the number of send and receive threads should, in most cases, still be sufficient. On a 40 CPU/thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.
On a 48 CPU/thread server it is possible to optimize further by using 12 tc threads, giving 2 more CPUs to the OS and interrupts, avoiding the use of the IO threads and the main thread on the same CPU, and adding 1 more receive thread. Summary: as both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
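
    A hedged config.ini sketch of the 24 CPU/thread layout described above; the CPU numbers are illustrative and would have to match your host's actual topology, and you would set either MaxNoOfExecutionThreads or ThreadConfig, not both:

    ```ini
    # config.ini fragment for the data nodes (illustrative only)
    [ndbd default]
    # Simple approach: let the data node pick the thread mix itself
    # MaxNoOfExecutionThreads=24

    # Explicit approach: 8 ldm, 4 tc, 3 recv, 3 send, 1 rep, with main and io sharing CPU 0
    ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={count=1,cpubind=19},main={cpubind=0},io={cpubind=0}
    ```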

    Read the article

  • Why is jQuery's .data() function better for preventing memory leaks?

    - by burak ozdogan
    Hi, regarding the jQuery utility function jQuery.data(), the online documentation says: "The jQuery.data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." Why can using document.body.foo = 52; result in a memory leak (and under what conditions), such that I should use jQuery.data(document.body, 'foo', 52); instead? Should I ALWAYS prefer .data() to using expandos, in every case? (I would appreciate it if you could provide an example comparing the differences.) Thanks, burak ozdogan

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/

    Read the article

  • Using mcrypt to pass data across a webservice is failing

    - by adam
    Hi I'm writing an error handler script which encrypts the error data (file, line, error, message etc) and passes the serialized array as a POST variable (using curl) to a script which then logs the error in a central db. I've tested my encrypt/decrypt functions in a single file and the data is encrypted and decrypted fine: define('KEY', 'abc'); define('CYPHER', 'blowfish'); define('MODE', 'cfb'); function encrypt($data) { $td = mcrypt_module_open(CYPHER, '', MODE, ''); $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND); mcrypt_generic_init($td, KEY, $iv); $crypttext = mcrypt_generic($td, $data); mcrypt_generic_deinit($td); return $iv.$crypttext; } function decrypt($data) { $td = mcrypt_module_open(CYPHER, '', MODE, ''); $ivsize = mcrypt_enc_get_iv_size($td); $iv = substr($data, 0, $ivsize); $data = substr($data, $ivsize); if ($iv) { mcrypt_generic_init($td, KEY, $iv); $data = mdecrypt_generic($td, $data); } return $data; } echo "<pre>"; $data = md5(''); echo "Data: $data\n"; $e = encrypt($data); echo "Encrypted: $e\n"; $d = decrypt($e); echo "Decrypted: $d\n"; Output: Data: d41d8cd98f00b204e9800998ecf8427e Encrypted: ê÷#¯KžViiÖŠŒÆÜ,ÑFÕUW£´Œt?†÷>c×åóéè+„N Decrypted: d41d8cd98f00b204e9800998ecf8427e The problem is, when I put the encrypt function in my transmit file (tx.php) and the decrypt in my recieve file (rx.php), the data is not fully decrypted (both files have the same set of constants for key, cypher and mode). Data before passing: a:4:{s:3:"err";i:1024;s:3:"msg";s:4:"Oops";s:4:"file";s:46:"/Applications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;} Data decrypted: Mª4:{s:3:"err";i:1024@7OYªç`^;g";s:4:"Oops";s:4:"file";sôÔ8F•Ópplications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;} Note the random characters in the middle. My curl is fairly simple: $ch = curl_init($url); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . $data); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); $output = curl_exec($ch); Things I suspect could be causing this: Encoding of the curl request Something to do with mcrypt padding missing bytes I've been staring at it too long and have missed something really really obvious If I turn off the crypt functions (so the transfer tx-rx is unencrypted) the data is received fine. Any and all help much appreciated! Thanks, Adam

    Read the article

  • Synchronize Data between a Silverlight ListBox and a User Control

    - by psheriff
    One of the great things about XAML is the powerful data-binding capabilities. If you load up a list box with a collection of objects, you can display detail data about each object without writing any C# or VB.NET code. Take a look at Figure 1 that shows a collection of Product objects in a list box. When you click on a list box you bind the current Product object selected in the list box to a set of controls in a user control with just a very simple Binding statement in XAML.  Figure 1: Synchronizing a ListBox to a User Control is easy with Data Binding Product and Products Classes To illustrate this data binding feature I am going to just create some local data instead of using a WCF service. The code below shows a Product class that has three properties, namely, ProductId, ProductName and Price. This class also has a constructor that takes 3 parameters and allows us to set the 3 properties in an instance of our Product class. C#public class Product{  public Product(int productId, string productName, decimal price)  {    ProductId = productId;    ProductName = productName;    Price = price;  }   public int ProductId { get; set; }  public string ProductName { get; set; }  public decimal Price { get; set; }} VBPublic Class Product  Public Sub New(ByVal _productId As Integer, _                 ByVal _productName As String, _                 ByVal _price As Decimal)    ProductId = _productId    ProductName = _productName    Price = _price  End Sub   Private mProductId As Integer  Private mProductName As String  Private mPrice As Decimal   Public Property ProductId() As Integer    Get      Return mProductId    End Get    Set(ByVal value As Integer)      mProductId = value    End Set  End Property   Public Property ProductName() As String    Get      Return mProductName    End Get    Set(ByVal value As String)      mProductName = value    End Set  End Property   Public Property Price() As Decimal    Get      Return mPrice    End Get    Set(ByVal value As Decimal)      mPrice = value    End Set  End PropertyEnd Class To fill up a list box you need a collection class of Product objects. The code below creates a generic collection class of Product objects. In the constructor of the Products class I have hard-coded five product objects and added them to the collection. In a real-world application you would get your data through a call to service to fill the list box, but for simplicity and just to illustrate the data binding, I am going to just hard code the data. C#public class Products : List<Product>{  public Products()  {    this.Add(new Product(1, "Microsoft VS.NET 2008", 1000));    this.Add(new Product(2, "Microsoft VS.NET 2010", 1000));    this.Add(new Product(3, "Microsoft Silverlight 4", 1000));    this.Add(new Product(4, "Fundamentals of N-Tier eBook", 20));    this.Add(new Product(5, "ASP.NET Security eBook", 20));  }} VBPublic Class Products  Inherits List(Of Product)   Public Sub New()    Me.Add(New Product(1, "Microsoft VS.NET 2008", 1000))    Me.Add(New Product(2, "Microsoft VS.NET 2010", 1000))    Me.Add(New Product(3, "Microsoft Silverlight 4", 1000))    Me.Add(New Product(4, "Fundamentals of N-Tier eBook", 20))    Me.Add(New Product(5, "ASP.NET Security eBook", 20))  End SubEnd Class The Product Detail User Control Below is a user control (named ucProduct) that is used to display the product detail information seen in the bottom portion of Figure 1. 
This is very basic XAML that just creates a text block and a text box control for each of the three properties in the Product class. Notice the {Binding Path=[PropertyName]} on each of the text box controls. This means that if the DataContext property of this user control is set to an instance of a Product class, then the data in the properties of that Product object will be displayed in each of the text boxes. <UserControl x:Class="SL_SyncListBoxAndUserControl_CS.ucProduct"  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"  HorizontalAlignment="Left"  VerticalAlignment="Top">  <Grid Margin="4">    <Grid.RowDefinitions>      <RowDefinition Height="Auto" />      <RowDefinition Height="Auto" />      <RowDefinition Height="Auto" />    </Grid.RowDefinitions>    <Grid.ColumnDefinitions>      <ColumnDefinition MinWidth="120" />      <ColumnDefinition />    </Grid.ColumnDefinitions>    <TextBlock Grid.Row="0"               Grid.Column="0"               Text="Product Id" />    <TextBox Grid.Row="0"             Grid.Column="1"             Text="{Binding Path=ProductId}" />    <TextBlock Grid.Row="1"               Grid.Column="0"               Text="Product Name" />    <TextBox Grid.Row="1"             Grid.Column="1"             Text="{Binding Path=ProductName}" />    <TextBlock Grid.Row="2"               Grid.Column="0"               Text="Price" />    <TextBox Grid.Row="2"             Grid.Column="1"             Text="{Binding Path=Price}" />  </Grid></UserControl> Synchronize ListBox with User Control You are now ready to fill the list box with the collection class of Product objects and then bind the SelectedItem of the list box to the Product detail user control. The XAML below is the complete code for Figure 1. <UserControl x:Class="SL_SyncListBoxAndUserControl_CS.MainPage"  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"  xmlns:src="clr-namespace:SL_SyncListBoxAndUserControl_CS"  VerticalAlignment="Top"  HorizontalAlignment="Left">  <UserControl.Resources>    <src:Products x:Key="productCollection" />  </UserControl.Resources>  <Grid x:Name="LayoutRoot"        Margin="4"        Background="White">    <Grid.RowDefinitions>      <RowDefinition Height="Auto" />      <RowDefinition Height="*" />    </Grid.RowDefinitions>    <ListBox x:Name="lstData"             Grid.Row="0"             BorderBrush="Black"             BorderThickness="1"             ItemsSource="{Binding                   Source={StaticResource productCollection}}"             DisplayMemberPath="ProductName" />    <src:ucProduct x:Name="prodDetail"                   Grid.Row="1"                   DataContext="{Binding ElementName=lstData,                                          Path=SelectedItem}" />  </Grid></UserControl> The first step to making this happen is to reference the Silverlight project (SL_SyncListBoxAndUserControl_CS) where the Product and Products classes are located. I added this namespace and assigned it a namespace prefix of “src” as shown in the line below: xmlns:src="clr-namespace:SL_SyncListBoxAndUserControl_CS" Next, to use the data from an instance of the Products collection, you create a UserControl.Resources section in the XAML and add a tag that creates an instance of the Products class and assigns it a key of “productCollection”.   
<UserControl.Resources>    <src:Products x:Key="productCollection" />  </UserControl.Resources> Next, you bind the list box to this productCollection object using the ItemsSource property. You bind the ItemsSource of the list box to the static resource named productCollection. You can then set the DisplayMemberPath attribute of the list box to any property of the Product class that you want. In the XAML below I used the ProductName property. <ListBox x:Name="lstData"         ItemsSource="{Binding             Source={StaticResource productCollection}}"         DisplayMemberPath="ProductName" /> You now need to create an instance of the ucProduct user contol below the list box. You do this by once again referencing the “src” namespace and typing in the name of the user control. You then set the DataContext property on this user control to a binding. The binding uses the ElementName attribute to bind to the list box name, in this case “lstData”. The Path of the data is SelectedItem. These two attributes together tell Silverlight to bind the DataContext to the selected item of the list box. That selected item is a Product object. So, once this is bound, the bindings on each text box in the user control are updated and display the current product information. <src:ucProduct x:Name="prodDetail"               DataContext="{Binding ElementName=lstData,                                      Path=SelectedItem}" /> Summary Once you understand the basics of data binding in XAML, you eliminate a lot code that is otherwise needed to move data into controls and out of controls back into an object. Connecting two controls together is easy by just binding using the ElementName and Path properties of the Binding markup extension. Another good tip out of this blog is use user controls and set the DataContext of the user control to have all of the data on the user control update through the bindings. NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "SL – Synchronize List Box Data with User Control" from the drop-down. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".

    Read the article

  • Getting a database connectivity issue from only one part of my ASP.NET project?

    - by Greg
    Hi, something weird has started happening to my Dynamic Data project. Suddenly I am getting connection errors when going to the DD navigation pages, but things work fine on the default DD main page, and other MVC pages I've added myself also reach the database fine. Any ideas? So, in summary, if I go to the following web pages: a custom URL for my custom controller - this works fine, including getting database data; the main DD page from the root URL - this works fine; a link to a table maintenance page from the main DD page - I GET A DATABASE CONNECTIVITY ERROR. Some items: * I'm using SQL Server Express 2008 * There doesn't seem to be any debug/error info in VS2010 at all that I can see * Web config entry: <add name="Model1Container" connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=GREG\SQLEXPRESS_2008;Initial Catalog=greg_development;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient"/> Error: Server Error in '/' Application. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) Source Error: Line 39: DropDownList1.Items.Add(new ListItem("[Not Set]", NullValueString)); Line 40: } Line 41: PopulateListControl(DropDownList1); Line 42: // Set the initial value if there is one Line 43: string initialValue = DefaultValue; Source File: U:\My Dropbox\source\ToplogyLibrary\Topology_Web_Dynamic\DynamicData\Filters\ForeignKey.ascx.cs Line: 41 Stack Trace: [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. 
(provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +5009598 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() +234 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity) +341 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, SqlConnection owningObject) +129 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, TimeoutTimer timeout) +239 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, TimeoutTimer timeout, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +195 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +232 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +33 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +524 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +479 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +108 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +126 System.Data.SqlClient.SqlConnection.Open() +125 System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) +52 [EntityException: The underlying provider failed on Open.] 
System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) +161 System.Data.EntityClient.EntityConnection.Open() +98 System.Data.Objects.ObjectContext.EnsureConnection() +81 System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) +46 System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() +44 System.Data.Objects.ObjectQuery`1.GetEnumeratorInternal() +36 System.Data.Objects.ObjectQuery.System.Collections.IEnumerable.GetEnumerator() +10 System.Web.DynamicData.Misc.FillListItemCollection(IMetaTable table, ListItemCollection listItemCollection) +50 System.Web.DynamicData.QueryableFilterUserControl.PopulateListControl(ListControl listControl) +85 Topology_Web_Dynamic.ForeignKeyFilter.Page_Init(Object sender, EventArgs e) in U:\My Dropbox\source\ToplogyLibrary\Topology_Web_Dynamic\DynamicData\Filters\ForeignKey.ascx.cs:41 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnInit(EventArgs e) +91 System.Web.UI.UserControl.OnInit(EventArgs e) +83 System.Web.UI.Control.InitRecursive(Control namingContainer) +140 System.Web.UI.Control.AddedControl(Control control, Int32 index) +197 System.Web.UI.ControlCollection.Add(Control child) +79 System.Web.DynamicData.DynamicFilter.EnsureInit(IQueryableDataSource dataSource) +200 System.Web.DynamicData.QueryableFilterRepeater.<Page_InitComplete>b__1(DynamicFilter f) +11 System.Collections.Generic.List`1.ForEach(Action`1 action) +145 System.Web.DynamicData.QueryableFilterRepeater.Page_InitComplete(Object sender, EventArgs e) +607 System.EventHandler.Invoke(Object sender, EventArgs e) +0 System.Web.UI.Page.OnInitComplete(EventArgs e) +8871862 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +604 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1
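
    A quick sanity check, suggested here rather than taken from the post: run this in SSMS against the Express instance and compare the output with the Data Source value (GREG\SQLEXPRESS_2008) in the connection string, since error 26 usually means the named instance could not be resolved:

    ```sql
    SELECT @@SERVERNAME                   AS reported_server,
           SERVERPROPERTY('MachineName')  AS machine_name,
           SERVERPROPERTY('InstanceName') AS instance_name;
    ```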

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install it, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios where you can still successfully recover data. If you have only a full database backup: This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups: Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups: This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore the transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now, the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
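
    Purely to make the backup-based strategies above concrete (this is separate from the ApexSQL walkthrough that follows), a minimal T-SQL sketch of the full + differential + log restore chain with a point-in-time stop; the backup file names and the STOPAT time are hypothetical:

    ```sql
    USE master;
    GO
    -- 1. Full backup, leaving the database ready for further restores
    RESTORE DATABASE AdventureWorks
        FROM DISK = 'D:\Backup\AdventureWorks_full.bak'
        WITH NORECOVERY, REPLACE;
    -- 2. Most recent differential taken after that full backup
    RESTORE DATABASE AdventureWorks
        FROM DISK = 'D:\Backup\AdventureWorks_diff.bak'
        WITH NORECOVERY;
    -- 3. Log backups in order, stopping just before the accidental DELETE
    RESTORE LOG AdventureWorks
        FROM DISK = 'D:\Backup\AdventureWorks_log1.trn'
        WITH NORECOVERY;
    RESTORE LOG AdventureWorks
        FROM DISK = 'D:\Backup\AdventureWorks_log2.trn'
        WITH STOPAT = '2014-06-04 10:15:00', RECOVERY;
    GO
    ```
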
To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the greater the chance that the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab. In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list. In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option. The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range. Selecting a narrow time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the generated script and manually remove such records. After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too. The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one. The recovery process is completed and 11 records are found and scripted, as expected.
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like: INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' ); To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover lost SQL database data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
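    For reference, the "full database backup and a chain of transaction log backups" strategy described at the start of this excerpt corresponds to a point-in-time restore roughly like the following T-SQL sketch; the backup file names and the STOPAT time are placeholders, not values from the scenario above.

        -- Restore the last full backup, leaving the database ready for further restores
        RESTORE DATABASE AdventureWorks
            FROM DISK = N'D:\Backup\AdventureWorks_Full.bak'
            WITH NORECOVERY, REPLACE;

        -- Restore the most recent differential backup, if one exists
        RESTORE DATABASE AdventureWorks
            FROM DISK = N'D:\Backup\AdventureWorks_Diff.bak'
            WITH NORECOVERY;

        -- Restore the log backups in the order they were taken
        RESTORE LOG AdventureWorks
            FROM DISK = N'D:\Backup\AdventureWorks_Log_1.trn'
            WITH NORECOVERY;

        -- Stop just before the accidental DELETE and bring the database online
        RESTORE LOG AdventureWorks
            FROM DISK = N'D:\Backup\AdventureWorks_Log_2.trn'
            WITH STOPAT = '2013-01-09 14:29:00', RECOVERY;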

    Read the article

  • Spatial data in the UK

    - by simonsabin
    I am just loving the fact that the Ordnance Survey has now released a huge amount of data that can be used freely. I’ve downloaded the Panorama (tm) data http://www.ordnancesurvey.co.uk/oswebsite/products/land-form-panorama-contours/index.html, which contains all the contours for the UK. This I’ve loaded into SQL Server using Safe Computing’s FME ( http://www.safe.com/ ). This is because the data is an AutoCAD DXF file and translating that to SQL Server spatial data is not easy. The FME workbench is not...(read more)
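    Once the contours are loaded, querying them in SQL Server is straightforward. A minimal sketch, assuming a hypothetical table layout (the table, columns, index extent, and query coordinates below are my own examples; SRID 27700 is the British National Grid):

        -- Hypothetical table holding the imported contour lines
        CREATE TABLE dbo.Contours (
            ContourId int IDENTITY(1,1) PRIMARY KEY,
            Elevation int NOT NULL,
            Shape     geometry NOT NULL   -- stored in SRID 27700
        );

        -- Spatial index covering roughly the GB national grid extent
        CREATE SPATIAL INDEX SIdx_Contours_Shape
            ON dbo.Contours (Shape)
            USING GEOMETRY_GRID
            WITH (BOUNDING_BOX = (0, 0, 700000, 1300000));

        -- Find contours that cross a 10 km square (coordinates are made up)
        DECLARE @area geometry = geometry::STGeomFromText(
            'POLYGON((400000 100000, 410000 100000, 410000 110000, 400000 110000, 400000 100000))', 27700);

        SELECT ContourId, Elevation
        FROM dbo.Contours
        WHERE Shape.STIntersects(@area) = 1;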

    Read the article

  • This Week on the Green Data Center Management Front

    Among the big news this week in green data center management: Horizon Data Center Solutions announces a partnership with Power Loft; Cisco, GDSYS, and Incheon Metropolitan City partner to build a next-generation green data center; and Vycon Energy demonstrates how its flywheel backup power systems can be used for health-care data system power backup.

    Read the article

  • This Week's News From the Green Data Center Management Front

    Among this week's developments in green data center management: myriad federal and state tax credits are now available for Green IT projects related to data centers; the EPA is finalizing its Energy Star program for data centers; and Numara Software has a way to help you better green your data center.

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise, part of the Oracle Advanced Analytics option to Oracle Database, and read more about their story here. Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). It also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life cycle, from data collection to model building, model assessment, and then outcome assessment and feedback into the model-building process. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff who need to handle potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The more software can simplify the interactions among users and systems, the greater the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • The HTG Guide to Hiding Your Data in a TrueCrypt Hidden Volume

    - by Jason Fitzpatrick
    Last week we showed you how to set up a simple, but strongly encrypted, TrueCrypt volume to help you protect your sensitive data. This week we’re digging in deeper and showing you how to hide your encrypted data within your encrypted data.

    Read the article

  • Inserting data to database from Android

    - by Angel
    I have to build an application where the requirement is that my clients will send data from their Android devices and I have to save that data to a database. I have done the part of the coding that inserts data from the Android emulator into my XAMPP database on localhost; now I have to implement the real thing. How can I connect the devices where my application will be installed to the XAMPP database I have created, so that the data they send can be inserted into it?
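    For what it's worth, the usual pattern is for the devices to post their data over HTTP to a small server-side script rather than connect to the database directly, and that script performs the insert. Assuming the XAMPP database is MySQL, the server side might boil down to something like this sketch; the table and column names are placeholders I made up:

        -- Hypothetical table that receives the posted data
        CREATE TABLE device_messages (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            device_id   VARCHAR(64) NOT NULL,
            payload     TEXT        NOT NULL,
            received_at TIMESTAMP   DEFAULT CURRENT_TIMESTAMP
        );

        -- Parameterized insert the server-side script would run for each request
        INSERT INTO device_messages (device_id, payload)
        VALUES (?, ?);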

    Read the article

  • Attunity Oracle CDC Solution for SSIS - Beta

    We in no way work for Attunity but we were asked to test drive a beta version of their Oracle CDC solution for SSIS.  Everybody should know that moving more data than you need to takes too much time and uses resources that could be better employed doing something else.  Change Data Capture is a technology that is designed to help you identify only the data that has had something done to it, so you can move only what is needed.  Microsoft have implemented this exact functionality in SQL Server 2008 and I really like it there.  Attunity though are doing it on Oracle. DISCLAIMER: This is a BETA release and some of the parts are a bit ugly/difficult to work with.  The idea though is definitely right and the product, once working, does exactly what it says on the tin.  They have always been helpful to me when I have had a problem with the product, and if that continues then the beta-testing pain should be eased somewhat. In due course I am going to be doing some videos of me using the product.  If you use Oracle and SSIS then give it a go. Here is their product description.   Attunity is a Microsoft SQL Server technology partner and the creator of the Microsoft Connectors for Oracle and Teradata, currently available in SQL Server 2008 Enterprise Edition. Attunity released a beta version of the Attunity Oracle-CDC for SSIS, a product that integrates continually changing Oracle data into SSIS, efficiently and in real time. Attunity designed the product and integrated it into SSIS to enable the simple creation of change data capture (CDC) solutions, accelerate implementation time, and reduce resources and costs. They also use log-based CDC, so the solution has minimal impact on the Oracle source system. You can use the product to implement enterprise-class data replication, synchronization, and real-time business intelligence (BI) and data warehousing projects, quickly and efficiently, leveraging your existing SQL Server investments and resource skills. Attunity architected the product specifically for the Microsoft SSIS developer community and the product is available for both SQL Server 2005 and SQL Server 2008. It offers the following key capabilities: · Log-based, non-intrusive Oracle CDC · Full integration into SSIS and the Business Intelligence Developer Studio · Automatic generation of SSIS packages for CDC as well as full-loads of Oracle data · Filtering of Oracle tables and columns at the source · Monitoring and control of CDC processing Click to learn more and download the beta.
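    For comparison, the native SQL Server 2008 change data capture mentioned above is enabled with a couple of system stored procedures. A minimal sketch, assuming a hypothetical dbo.Orders table:

        -- Enable change data capture at the database level
        EXEC sys.sp_cdc_enable_db;

        -- Start capturing changes for one table (schema, table, and role names are examples)
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Orders',
            @role_name     = NULL;

        -- Read everything captured so far for the dbo_Orders capture instance
        DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
        DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

        SELECT *
        FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');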

    Read the article

  • Empty $_POST data

    - by Antimony
    I am trying to post a post to my MyBB server from a Python script, but try as I might, I can't get it to work. The request shows up in the forensic log and the headers are in the $_SERVER variable, but $_POST is always an empty array. The error log shows nothing, even at the debug level. I've already tried searching, but I haven't found anything that's helped. I already checked the post_max_size thing, which is 8M. Another factor is that it's just my own requests which aren't going through. Browser generated requests seem to do just fine. I've looked and looked, but I can't find anything I'm doing differently that should matter. Anyway, here is an example request. POST /newreply.php?tid=1&processed=1 HTTP/1.1 Host: <redacted> Accept-Encoding: identity Content-Length: 1153 Content-Type: multipart/form-data; boundary=-->0xa216654L Cookie: sid=<redacted>; mybb[lastvisit]=1354995469; mybb[lastactive]=1354995500; mybb[threadread]=a%3A1%3A%7Bi%3A1%3Bi%3A1354995469%3B%7D; mybb[forumread]=a%3A1%3A%7Bi%3A2%3Bi%3A1354995469%3B%7D; loginattempts=1; mybbuser=2_ZlVVfaYS9FstZGQzr4KiNRUm3Z4xAgJkTPPq2ouFcuaragOTVQ Accept: text/html User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1 -->0xa216654L Content-Disposition: form-data; name="my_post_key" 257b2bbef4334000d9088169154900a3 -->0xa216654L Content-Disposition: form-data; name="quoted_ids" -->0xa216654L Content-Disposition: form-data; name="tid" 1 -->0xa216654L Content-Disposition: form-data; name="message" foo!2 -->0xa216654L Content-Disposition: form-data; name="attachmentact" -->0xa216654L Content-Disposition: form-data; name="attachmentaid" -->0xa216654L Content-Disposition: form-data; name="icon" -1 -->0xa216654L Content-Disposition: form-data; name="posthash" e93a2c78ce3f6807a86fd475ef4178cf -->0xa216654L Content-Disposition: form-data; name="postoptions[subscriptionmethod]" -->0xa216654L Content-Disposition: form-data; name="replyto" -->0xa216654L Content-Disposition: form-data; name="message_new" foo!2 -->0xa216654L Content-Disposition: form-data; name="submit" Post Reply -->0xa216654L Content-Disposition: form-data; name="attachment"; filename="" Content-Type: application/octet-stream -->0xa216654L Content-Disposition: form-data; name="action" do_newreply -->0xa216654L Content-Disposition: form-data; name="subject" Lol -->0xa216654L

    Read the article

  • Data-driven animation states

    - by user8363
    I'm trying to handle animations in a 2D game engine hobby project, without hard-coding them. Hard-coding animation states seems like a common but very strange phenomenon to me. A little background: I'm working with an entity system where components are bags of data and subsystems act upon them. I chose to use a polling system to update animation states. By animation states I mean: "walking_left", "running_left", "walking_right", "shooting", ... My idea for handling animations was to design it as a data-driven model. Data could be stored in an XML file, an RDBMS, ... and could be loaded at the start of a game, a level, ... This way you can easily edit animations and transitions without having to change the code everywhere in your game. As an example I made an XML draft of the data definitions I had in mind. One very important piece of data would simply be the description of an animation. An animation would have a unique id (a descriptive name). It would hold a reference id to an image (the sprite sheet it uses, because different animations may use different sprite sheets). It would also specify the frames per second to run the animation at. The "replay" here defines whether an animation should be run once or loop infinitely. Then I defined a list of rectangles as frames. <animation id='WIZARD_WALK_LEFT'> <image id='WIZARD_WALKING' /> <fps>50</fps> <replay>true</replay> <frames> <rectangle> <x>0</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> <rectangle> <x>45</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> </frames> </animation> Animation data would be loaded and held in an animation resource pool and referenced by the game entities that use it. It would be treated as a resource like an image, a sound, a texture, ... The second piece of data to define would be a state machine to handle animation states and transitions. This defines each state a game entity can be in, which states it can transition to, and what triggers that state change. This state machine would differ from entity to entity, because a bird might have the states "walking" and "flying" while a human would only have the state "walking". However, it could be shared by different entities, because multiple humans will probably have the same states (especially when you define some common NPCs like monsters, etc.). Additionally, an orc might have the same states as a human, just to demonstrate that this state definition might be shared, but only by a select group of game entities. <state id='IDLE'> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_LEFT'> <event trigger='LEFT_UP' goto='IDLE' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_RIGHT'> <event trigger='RIGHT_UP' goto='IDLE' /> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> </state> These states can be handled by a polling system. Each game tick, it grabs the current state of a game entity and checks all triggers. If a condition is met, it changes the entity's state to the "goto" state. The last part I was struggling with was how to bind animation data and animation states to an entity. The most logical approach seemed to me to add a pointer to the state machine data an entity uses, and to define for each state in that machine which animation it uses. Here is an XML example of how I would define the animation behavior and graphical representation of some common entities in a game, by referencing the animation state and animation data ids. 
Note that both "wizard" and "orc" have the same animation states but a different animation. Also, a different animation could mean a different sprite sheet, or even a different sequence of animations (an animation could be longer or shorter). <entity name="wizard"> <state id="IDLE" animation="WIZARD_IDLE" /> <state id="MOVING_LEFT" animation="WIZARD_WALK_LEFT" /> </entity> <entity name="orc"> <state id="IDLE" animation="ORC_IDLE" /> <state id="MOVING_LEFT" animation="ORC_WALK_LEFT" /> </entity> When the entity is created, it would add a list of states with the state machine data and an animation data reference. In the future I would use the entity system to build whole entities by defining components in a similar XML format. -- This is what I have come up with after some research. However, I had some trouble getting my head around it, so I was hoping for some feedback. Is there something here that doesn't make sense, or is there a better way to handle these things? I grasped the idea of iterating through frames, but I'm having trouble taking it a step further, and this is my attempt to do that.
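    Since the post mentions that this data could also live in an RDBMS rather than XML, here is a minimal sketch of how the same definitions might be laid out as SQL tables; every table and column name here is just an assumption mirroring the XML above:

        -- One row per animation, mirroring the <animation> element
        CREATE TABLE animation (
            animation_id VARCHAR(64) PRIMARY KEY,   -- e.g. 'WIZARD_WALK_LEFT'
            image_id     VARCHAR(64) NOT NULL,      -- sprite sheet reference
            fps          INT NOT NULL,
            replay       INT NOT NULL               -- 0 = run once, 1 = loop
        );

        -- The frame rectangles, in playback order
        CREATE TABLE animation_frame (
            animation_id VARCHAR(64) NOT NULL REFERENCES animation (animation_id),
            frame_index  INT NOT NULL,
            x INT NOT NULL, y INT NOT NULL,
            width INT NOT NULL, height INT NOT NULL,
            PRIMARY KEY (animation_id, frame_index)
        );

        -- The state machine: which trigger moves which state to which other state
        CREATE TABLE state_transition (
            state_id      VARCHAR(64) NOT NULL,     -- e.g. 'IDLE'
            trigger_id    VARCHAR(64) NOT NULL,     -- e.g. 'LEFT_DOWN'
            goto_state_id VARCHAR(64) NOT NULL,
            PRIMARY KEY (state_id, trigger_id)
        );

        -- Binds an entity type and a state to the animation it plays
        CREATE TABLE entity_state_animation (
            entity_name  VARCHAR(64) NOT NULL,      -- e.g. 'wizard'
            state_id     VARCHAR(64) NOT NULL,
            animation_id VARCHAR(64) NOT NULL REFERENCES animation (animation_id),
            PRIMARY KEY (entity_name, state_id)
        );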

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >