The heart of a small business is its data. Lose it, and you're in big trouble. But what about your laptop? We look at data security -- from software tweaks to hardware encryption -- to lock down your laptop.
I am using assimp to load 3D model data. I have noticed that each loaded model is made up of different meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? Looking through the loaded index data seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure.
I am using C++ and DirectX9
Thank you,
Mark
IT security organizations are spending too much on data protection for compliance but not enough on securing key trade secrets, the real crown jewels of corporate data.
In response to my answer yesterday about rotating an Image, Jamund told me to use .data() instead of .attr()
At first I thought he was right, but then I thought about a bigger context... Is it always better to use .data() instead of .attr()? I looked at some other posts, like what-is-better-data-or-attr or jquery-data-vs-attrdata, but the answers did not satisfy me...
So I moved on and edited the example by adding CSS. I thought it might be useful to give each image a different style depending on its rotation. My style was the following:
.rp[data-rotate="0"] {
border:10px solid #FF0000;
}
.rp[data-rotate="90"] {
border:10px solid #00FF00;
}
.rp[data-rotate="180"] {
border:10px solid #0000FF;
}
.rp[data-rotate="270"] {
border:10px solid #00FF00;
}
Because design and coding are often separated, it could be a nice feature to handle this in CSS instead of adding the functionality to the JavaScript. Also, in my case data-rotate is like a special state the image is currently in, so in my opinion it makes sense to represent it within the DOM.
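For instance, driving the state through .attr() lets the CSS above react on its own. A minimal sketch (the rotate() helper and the 90-degree stepping are my own illustration, not part of the original example):

function rotate($img){
    // .attr() writes the DOM attribute, so the .rp[data-rotate="..."]
    // rules above pick up each new state automatically
    var next = ((parseInt($img.attr("data-rotate"), 10) || 0) + 90) % 360;
    $img.attr("data-rotate", next);
}

$(".rp").on("click", function(){ rotate($(this)); });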
I also thought this could be a case where it is much better to save the value with .attr() than with .data(), something never mentioned in any of the posts I read.
But then I thought about performance. Which function is faster? I built my own test:
<!DOCTYPE HTML>
<html>
<head>
<title>test</title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript">
// time 10,000 .attr() writes first, then 10,000 .data() writes
function runfirst(dobj,dname){
    console.log("runfirst "+dname);
    console.time(dname+"-attr");
    for(var i=0;i<10000;i++){
        dobj.attr("data-test","a"+i);
    }
    console.timeEnd(dname+"-attr");
    console.time(dname+"-data");
    for(var i=0;i<10000;i++){
        dobj.data("data-test","a"+i);
    }
    console.timeEnd(dname+"-data");
}
// same test with the order reversed, to rule out ordering effects
function runlast(dobj,dname){
    console.log("runlast "+dname);
    console.time(dname+"-data");
    for(var i=0;i<10000;i++){
        dobj.data("data-test","a"+i);
    }
    console.timeEnd(dname+"-data");
    console.time(dname+"-attr");
    for(var i=0;i<10000;i++){
        dobj.attr("data-test","a"+i);
    }
    console.timeEnd(dname+"-attr");
}
$(document).ready(function(){
    runfirst($("#rp4"),"#rp4");
    runfirst($("#rp3"),"#rp3");
    runlast($("#rp2"),"#rp2");
    runlast($("#rp1"),"#rp1");
});
</script>
</head>
<body>
<div id="rp1">Testdiv 1</div>
<div id="rp2" data-test="1">Testdiv 2</div>
<div id="rp3">Testdiv 3</div>
<div id="rp4" data-test="1">Testdiv 4</div>
</body>
</html>
It should also show whether it makes a difference if data-test is predefined or not.
One result was this:
runfirst #rp4
#rp4-attr: 515ms
#rp4-data: 268ms
runfirst #rp3
#rp3-attr: 505ms
#rp3-data: 264ms
runlast #rp2
#rp2-data: 260ms
#rp2-attr: 521ms
runlast #rp1
#rp1-data: 284ms
#rp1-attr: 525ms
So the .attr() function always needed more time than the .data() function. That's an argument for .data(), I thought, because performance is always an argument!
Then I wanted to post my results here with some questions, and while writing I compared them with the similar questions Stack Overflow suggested.
And sure enough, there was an interesting post about performance.
I read it and ran its example. And now I am confused! That test showed that .data() is slower than .attr()!? Why is that?
First I thought it was because of a different jQuery version, so I edited the test and saved it with the new one. But the result didn't change...
So now my questions to you:
Why are there differences in performance between these two examples?
Would you prefer to use data-* HTML5 attributes instead of .data() if it represents a state, even though it wouldn't be needed at coding time? Why or why not?
Now, regarding performance:
Would performance be an argument for using .attr() instead of .data() if .attr() turned out to be faster, even though data-* attributes are meant to be used with .data()?
UPDATE 1:
I did see that without the overhead, .data() is much faster. I misinterpreted the data :) But I'm more interested in my second question. :)
Would you prefer to use data-* HTML5 attributes instead of .data() if it represents a state, even though it wouldn't be needed at coding time? Why or why not?
Are there other reasons you can think of to use .attr() and not .data()? E.g. interoperability, because .data() is jQuery-specific, while HTML attributes can be read by everything?
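To illustrate that interoperability point, here is a minimal sketch, assuming an element like <img id="rp1" class="rp" data-rotate="0"> from the rotation example above: .attr() writes through to the DOM, so CSS attribute selectors and plain getAttribute() see the change, while .data() only writes to jQuery's internal store.

var $img = $("#rp1");

// .attr() updates the DOM attribute: CSS rules like
// .rp[data-rotate="90"] react, and non-jQuery code sees it too
$img.attr("data-rotate", "90");
console.log($img[0].getAttribute("data-rotate")); // "90"

// .data() only updates jQuery's internal cache: the DOM attribute
// is untouched, so the CSS attribute selectors will NOT react
$img.data("rotate", "180");
console.log($img[0].getAttribute("data-rotate")); // still "90"
console.log($img.data("rotate")); // "180", visible through jQuery only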
UPDATE 2:
As we can see from T.J. Crowder's speed test in his answer, attr is much faster than data! Which is confusing me again :) But please! Performance is an argument, but not the most important one! So please answer my other questions too!
Is it possible to use ASP.NET Dynamic Data with SubSonic 3 in place of LINQ to SQL classes or the Entity Framework? MetaModel.RegisterContext() throws an exception if you use the context class that SubSonic generates. I thought I remembered coming across a SubSonic/Dynamic Data example back before SubSonic 3 was released, but I can't find it now. Has anyone been able to get this to work?
Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change: sp_cdc_change_job @job_type='cleanup', @retention=7200 The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
Oracle's Master Data Management suite has seen remarkable development progress in the past year and a half. Leveraging out-of-the-box integration to applications provided by Application Integration Architecture, the cost, risk and time it takes to implement an MDM solution has been cut in half. Oracle Applications are now 'MDM Aware', Data Quality tools have reached state-of-the-art status, and new hubs are coming on line. In this AppsCast, Pascal Laik, VP MDM Products discusses this progress, what it means for Oracle customers, and where we are going from here.
I am looking for some well-known algorithms to consider when handling very large amounts of data. (Edit: by large amounts of data I mean records in a database, excluding blobs.) These algorithms, if not in their entirety then at least in parts, may be used in big web applications like Twitter, Last.fm, Amazon, etc.
Specifically, I'm looking for names of, or links to, such algorithms. My primary interest lies in developing a deep understanding of working with large sets of database records and writing efficient code for doing so.
In my last question I asked how best to send a string from one view controller to another, both of which were on a navigation stack:
http://stackoverflow.com/questions/2898860/pass-string-from-tableviewcontroller-to-viewcontroller-in-navigation-stack
However, I just realised I could instead pass the path to the file in the app's Documents folder. Since the first view controller (the table view) has already accessed the data in the file, should I pass the path or the data itself to the pushed view controller?
Choosing a long term data storage medium isn't as easy as you may think. You might imagine that the data could be burnt to CD, locked in a cupboard and that it would last forever, however unfortunately... [Author: Chris Holgate - Computers and Internet - April 02, 2010]
Think you would know what to do if the zombie apocalypse happened today? This awesome flow-chart from Game Informer can help you make the right decisions based on your location or time period! Note: Click on the image embedded in the post to view and download the full-size (2500*1763 pixels) wallpaper. Will You Survive The Undead Apocalypse? [Game Informer]
I'm looking for an implementation of the minimum-cost flow graph problem in OCaml.
The OCaml library ocamlgraph has a Goldberg algorithm implementation.
The paper Efficient implementation of the Goldberg-Tarjan minimum-cost flow algorithm notes that the Goldberg-Tarjan algorithm can find a minimum-cost flow. The question is: does the ocamlgraph implementation also find the minimum cost? The library documentation only states that it is suitable at least for the maximum flow problem.
If not, does anybody have a good link to some nice minimum-cost flow algorithm code? I will manually translate it into OCaml. Forgive me if I missed it on Wikipedia: there are too many algorithms on flow networks for a first day!
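Since any readable implementation would do for a manual translation, here is a minimal, self-contained sketch of the classic successive-shortest-paths approach to min-cost max-flow (Bellman-Ford on the residual graph), in plain JavaScript rather than OCaml. The MinCostFlow name and the edge representation are this sketch's own conventions, not ocamlgraph's API.

function MinCostFlow(n){
    this.n = n;
    this.adj = [];
    for(var i = 0; i < n; i++) this.adj.push([]);
}

// add edge u->v with capacity cap and per-unit cost; also adds the
// zero-capacity residual twin v->u with negated cost
MinCostFlow.prototype.addEdge = function(u, v, cap, cost){
    var fwd = {to: v, cap: cap, cost: cost};
    var rev = {to: u, cap: 0, cost: -cost};
    fwd.rev = rev; rev.rev = fwd;
    this.adj[u].push(fwd);
    this.adj[v].push(rev);
};

// repeatedly push flow along the cheapest augmenting path until none is left
MinCostFlow.prototype.run = function(s, t){
    var flow = 0, cost = 0;
    for(;;){
        // Bellman-Ford: cheapest path from s in the residual graph
        var dist = [], parent = [];
        for(var i = 0; i < this.n; i++){ dist.push(Infinity); parent.push(null); }
        dist[s] = 0;
        for(var round = 0; round < this.n - 1; round++){
            var changed = false;
            for(var u = 0; u < this.n; u++){
                if(dist[u] === Infinity) continue;
                for(var k = 0; k < this.adj[u].length; k++){
                    var e = this.adj[u][k];
                    if(e.cap > 0 && dist[u] + e.cost < dist[e.to]){
                        dist[e.to] = dist[u] + e.cost;
                        parent[e.to] = e;
                        changed = true;
                    }
                }
            }
            if(!changed) break;
        }
        if(dist[t] === Infinity) break; // sink unreachable: done

        // find the bottleneck along the path, then augment
        var push = Infinity, e2;
        for(e2 = parent[t]; e2; e2 = parent[e2.rev.to]) push = Math.min(push, e2.cap);
        for(e2 = parent[t]; e2; e2 = parent[e2.rev.to]){ e2.cap -= push; e2.rev.cap += push; }
        flow += push;
        cost += push * dist[t];
    }
    return {flow: flow, cost: cost};
};

// tiny usage example: 4 nodes, source 0, sink 3
var g = new MinCostFlow(4);
g.addEdge(0, 1, 2, 1); g.addEdge(0, 2, 1, 2);
g.addEdge(1, 2, 1, 1); g.addEdge(1, 3, 1, 3);
g.addEdge(2, 3, 2, 1);
console.log(g.run(0, 3)); // {flow: 3, cost: 10}

This computes the cheapest maximum flow (assuming no negative cycles); it is far simpler, though asymptotically slower, than the Goldberg-Tarjan cost-scaling algorithm the paper describes.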
I have a data.frame:
df<-data.frame(a=c("x","x","y","y"),b=c(1,2,3,4))
> df
a b
1 x 1
2 x 2
3 y 3
4 y 4
What's the easiest way to print out each pair of values as a list of strings like this:
"x1", "x2", "y1", "y2"
I have read a lot about Flow theory and its applications to video games, and an idea got stuck in my mind. If a number of weight values are applied to different parameters of a certain game level (i.e. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333 or, say, the Golden Ratio, would it be a good idea to arrange the levels in such an order that the overall complexity tends to increase by that ratio? Has somebody tried it?
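As a sketch of the idea (all weights, parameter names, and sample numbers here are made up for illustration), the per-level score could be a plain weighted sum, and the ordering could then be checked against the target ratio:

// hypothetical weighted score: each level parameter times its tuning weight
function levelScore(level, w){
    return w.size * level.size +
           w.enemies * level.enemies +
           w.strength * level.strength +
           w.variance * level.variance;
}

var weights = {size: 0.2, enemies: 1.0, strength: 1.5, variance: 0.8};
var levels = [
    {name: "L1", size: 10, enemies: 4, strength: 2, variance: 1},
    {name: "L2", size: 14, enemies: 7, strength: 3, variance: 2},
    {name: "L3", size: 20, enemies: 10, strength: 5, variance: 3}
];

// order by complexity, then see how close each step comes to the target ratio
var ratio = 1.3333;
levels.sort(function(a, b){ return levelScore(a, weights) - levelScore(b, weights); });
for(var i = 1; i < levels.length; i++){
    var r = levelScore(levels[i], weights) / levelScore(levels[i - 1], weights);
    console.log(levels[i].name + ": ratio " + r.toFixed(2) + " (target " + ratio + ")");
}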
The popular PTS (Platform Technology Services) technical trainings for partners now include a workshop on Big Data.
The first workshop will take place in Milan on July 10-12. (You can register via the link below.)
Oracle Big Data Technical Workshop, July 10-12, 2012: Cinisello Balsamo, Milan, Italy
For more info contact [email protected]
My client wants to standardize the address information collected for their customers, for existing and future addresses, particularly the street suffixes. The application used to enter and collect address information has the street suffix separated from the address field, but it is a textbox instead of a drop-down list, so the values are not standardized. I know there are some options out there to standardize data, but they would like a less expensive alternative. Are there any functions in SQL Server that I can use to standardize the data?
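As far as I know, T-SQL has no built-in address standardization function, so the usual low-cost approach is a lookup table of suffix variants joined to in an UPDATE. The core mapping logic, sketched in JavaScript here just to keep this page's examples in one language (the map entries are only a sample, not the full USPS list):

function standardizeSuffix(suffix, map){
    // normalize case, trim, drop periods, then look the variant up
    var key = suffix.trim().toUpperCase().replace(/\./g, "");
    return map[key] || suffix; // leave unknown values for manual review
}

var suffixMap = {
    "ST": "ST", "STREET": "ST",
    "AVE": "AVE", "AVENUE": "AVE", "AV": "AVE",
    "RD": "RD", "ROAD": "RD",
    "BLVD": "BLVD", "BOULEVARD": "BLVD"
};

console.log(standardizeSuffix(" Street. ", suffixMap)); // "ST"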
A corrupt table in MS Access means lost time and data. It can lead to a loss of revenue or even employment. Learn how you might be able to recover most of the data when the worst happens.
States are passing more and more data security laws, and the US Senate and House have bills meandering through Congress; securing personal information and encrypting that data is no longer optional.
Among the big news this week in green data center management: Singapore pledges to cut carbon emissions by 16 percent by 2020, and for the second year in a row, data security is the primary concern and driver of IT asset disposition compliance efforts.
As Energy Star certification for data centers moves closer to becoming a reality, the EPA has also begun crafting a specification for certifying data center storage devices.
When you create an Oracle database on the Amazon cloud, you will need to store your database files somewhere on EC2. There are basically three places where database files can be stored:
1. Local drive - This is the local drive that is part of the virtual server EC2 instance.
2. Elastic Block Store (EBS) - Network-attached storage that appears as a local drive.
3. Simple Storage Service (S3) - 'Storage for the Internet'.
S3 is not high speed and is intended for storing static document-type files. It can also be used for storing static web page files. Local drives are ephemeral, so they are not appropriate as a database storage device. That leaves EBS as the best place to store database files. EBS volumes appear as local disk drives, but they are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently of the running life of a single Amazon EC2 instance.
If you use an EBS-backed instance for your database data, it will remain available after a reboot but not after a terminate. In many cases you will not need to terminate your instance but only stop it, which is the equivalent of a shutdown. To save your database data before you terminate an instance, you can snapshot the EBS volume to S3.
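For example, a snapshot can be kicked off programmatically; a minimal sketch with the AWS SDK for JavaScript (the region, volume ID, and description are placeholders):

var AWS = require("aws-sdk");
var ec2 = new AWS.EC2({region: "us-east-1"});

// snapshot the EBS volume holding the Oracle data files; the snapshot
// is stored in S3 and survives instance termination
ec2.createSnapshot({
    VolumeId: "vol-0123456789abcdef0",
    Description: "Oracle data files before terminating the instance"
}, function(err, data){
    if(err){ console.error(err); return; }
    console.log("Snapshot started: " + data.SnapshotId);
});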
Using EBS as a data store, you can move your Oracle data files from one instance to another. This allows you to move your database from one region or availability zone to another. Unfortunately, to scale out Oracle on Amazon RDS you cannot have read-only replicas; that is only possible with the other Oracle relational database, MySQL.
The free micro instances use EBS as their storage.
This is a very good white paper that has more details:
AWS Storage Options
This white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices. However, these are not storage devices you would use to store an Oracle database.
This slide deck discusses a lot of information that is in the white paper:
AWS Storage Options slideshow
With Master Data Services, IT organizations can centrally manage critical data assets companywide.
Too many SQL Servers to keep up with? Download a free trial of SQL Response to monitor your SQL Servers in just one intuitive interface. "The monitoring in SQL Response is excellent." - Mike Towery.