Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a re-usable library that adds methods for navigating and searching a graph (nodes/relationships, or if you like vertices/edges). The generic requirements are:

    - The main project already has classes equivalent to the graph class (holding the lists of nodes and relationships), the node class and the relationship class (which links nodes together), and it likely already has a persistence mechanism for this data (e.g. the classes might be built using Entity Framework for persistence).
    - Methods need to be added to each of these three classes: (a) the graph class needs methods like "search all nodes"; (b) the node class needs methods such as "find all children to depth i"; (c) the relationship class needs methods like "return relationship type", "get parent node" and "get child node".
    - The library would need to be told which classes play the graph/node/relationship roles, since different projects will use different names - somewhat like a generic collection, where you pass in the element type so the collection knows what it holds.
    - There would also need to be a way to tell the library which node property to use for equality checks (e.g. for a graph of webpages, the equality field might be the URI path).

    I'm assuming abstract base classes wouldn't really work here, as they would tie consumers to the same persistence approach, the same class names, and so on. What I really want is the ability to take any project with "graph-like" characteristics and add graph searching/walking methods to it.
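
    One way this could look in C# - a minimal sketch only, with illustrative interface and method names that are not from the question: small interfaces describe the node/relationship roles, the consuming project implements them on its existing (e.g. Entity Framework) classes, and the walking logic lives in generic extension methods that take an equality-key selector.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The consuming project maps its own classes onto these roles.
        public interface IRelationship<TNode>
        {
            string RelationshipType { get; }
            TNode ParentNode { get; }
            TNode ChildNode { get; }
        }

        public interface INode<TNode> where TNode : INode<TNode>
        {
            IEnumerable<TNode> Children { get; }
        }

        public static class GraphExtensions
        {
            // "Find all children to depth i", breadth-first; the caller supplies
            // the equality key (e.g. page => page.UriPath for a web graph).
            public static IEnumerable<TNode> FindChildrenToDepth<TNode, TKey>(
                this TNode start, int depth, Func<TNode, TKey> equalityKey)
                where TNode : INode<TNode>
            {
                var seen = new HashSet<TKey> { equalityKey(start) };
                var frontier = new List<TNode> { start };
                for (int i = 0; i < depth; i++)
                {
                    // HashSet.Add returns false for duplicates, so each node is
                    // visited once regardless of how many paths reach it.
                    frontier = frontier.SelectMany(n => n.Children)
                                       .Where(n => seen.Add(equalityKey(n)))
                                       .ToList();
                    foreach (var node in frontier)
                        yield return node;
                }
            }
        }

    Because nothing here mentions storage, the same extension methods work whether the concrete classes are Entity Framework entities or plain objects.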


  • How to move a point along an arc?

    - by bbZ
    I am writing an app that simulates robot-arm movement. What I want to achieve, and where I have a couple of problems, is moving a point along an arc (180 degrees) that is the arm's range of movement. I move an arm by grabbing its end (the elbow, the point I was talking about) with the mouse; a robot can have multiple arms with different arm lengths. If you can help me with this part, I'd be grateful. This is what I have so far, the drawing function:

        public void draw(Graphics graph) {
            Graphics2D g2d = (Graphics2D) graph.create();
            graph.setColor(color);
            graph.fillOval(location.x - 4, location.y - 4, point.width, point.height); // draws the elbow
            if (parentLocation != null) {
                graph.setColor(Color.black);
                graph.drawLine(location.x, location.y, parentLocation.x, parentLocation.y); // connects to the parent
                if (arc == null) { // draws the arc if not drawn yet
                    angle = new Double(90 - getAngle(parentInitLocation));
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                                           parentLocation.y - (parentDistance * 2 / 2),
                                           parentDistance * 2, parentDistance * 2,
                                           90 - getAngle(parentInitLocation), 180, Arc2D.OPEN);
                } else if (angle != null) { // if the parent has moved, the angle moves too
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                                           parentLocation.y - (parentDistance * 2 / 2),
                                           parentDistance * 2, parentDistance * 2,
                                           angle, 180, Arc2D.OPEN);
                }
                g2d.draw(arc);
            }
            if (spacePanel.getElbows().size() > level + 1) { // updates all child elbows' positions
                updateChild(graph);
            }
        }

    I just do not know how to keep the moving point from leaving the arc line. It must not be inside or outside the arc, only on it. I wanted to put a screenshot here, but sadly I don't have enough rep - am I allowed to put a link to it? Maybe you have other ideas for how to achieve this kind of thing. Here is the image: the red circle is the actual state, and the green one is what I want to do. EDIT2: As requested, repo link: https://github.com/dspoko/RobotArm
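
    A common way to handle this is not to move the point freely and then test it against the arc, but to project the mouse position onto the circle the arc lies on: take the mouse's angle around the arc's centre with atan2, clamp that angle to the arc's angular range, and recompute the point from the radius. A sketch (names are illustrative, not from the repo, and it assumes the angular range doesn't wrap past 180 degrees):

        import java.awt.geom.Point2D;

        // Projects a mouse position onto the arc centred at 'center' with radius r,
        // where startDeg..endDeg is the arm's allowed range (e.g. a 180-degree span).
        static Point2D clampToArc(Point2D mouse, Point2D center, double r,
                                  double startDeg, double endDeg) {
            // Screen y grows downwards, hence the flipped dy.
            double theta = Math.toDegrees(Math.atan2(center.getY() - mouse.getY(),
                                                     mouse.getX() - center.getX()));
            theta = Math.max(startDeg, Math.min(endDeg, theta)); // clamp onto the arc
            double rad = Math.toRadians(theta);
            return new Point2D.Double(center.getX() + r * Math.cos(rad),
                                      center.getY() - r * Math.sin(rad));
        }

    In the mouse-drag handler, instead of assigning the raw mouse position to the elbow, assign clampToArc(mousePoint, parentLocation, parentDistance, startDeg, endDeg); the elbow then slides along the arc and can never leave it.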


  • How to populate RRD database with CPU and MEM usage data?

    - by Tomaszs
    I have a Lighttpd server (on CentOS) and would like to display 4 graphs: lighttpd traffic, lighttpd requests per second, CPU usage and MEM usage. I've set the location of the rrd database in the lighttpd config like this:

        rrdtool.binary  = "/usr/bin/rrdtool"
        rrdtool.db-name = "/var/www/lighttpd.rrd"

    And put into my WWW cgi-bin an sh file that reads the data from the lighttpd RRD file and creates the traffic and requests-per-second graphs, like this:

        #!/bin/sh
        RRDTOOL=/usr/bin/rrdtool
        OUTDIR=/var/www/graphs
        INFILE=/var/www/lighttpd.rrd
        OUTPRE=lighttpd-traffic
        WIDTH=400
        HEIGHT=100
        DISP="-v bytes --title TrafficWebserver \
            DEF:binraw=$INFILE:InOctets:AVERAGE \
            DEF:binmaxraw=$INFILE:InOctets:MAX \
            DEF:binminraw=$INFILE:InOctets:MIN \
            DEF:bout=$INFILE:OutOctets:AVERAGE \
            DEF:boutmax=$INFILE:OutOctets:MAX \
            DEF:boutmin=$INFILE:OutOctets:MIN \
            CDEF:bin=binraw,-1,* \
            CDEF:binmax=binmaxraw,-1,* \
            CDEF:binmin=binminraw,-1,* \
            CDEF:binminmax=binmaxraw,binminraw,- \
            CDEF:boutminmax=boutmax,boutmin,- \
            AREA:binmin#ffffff: \
            STACK:binmax#f00000: \
            LINE1:binmin#a0a0a0: \
            LINE1:binmax#a0a0a0: \
            LINE2:bin#efb71d:incoming \
            GPRINT:bin:MIN:%.2lf \
            GPRINT:bin:AVERAGE:%.2lf \
            GPRINT:bin:MAX:%.2lf \
            AREA:boutmin#ffffff: \
            STACK:boutminmax#00f000: \
            LINE1:boutmin#a0a0a0: \
            LINE1:boutmax#a0a0a0: \
            LINE2:bout#a0a735:outgoing \
            GPRINT:bout:MIN:%.2lf \
            GPRINT:bout:AVERAGE:%.2lf \
            GPRINT:bout:MAX:%.2lf "

        $RRDTOOL graph $OUTDIR/$OUTPRE-hour.png  -a PNG --start -14400   $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-day.png   -a PNG --start -86400   $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-month.png -a PNG --start -2592000 $DISP -w $WIDTH -h $HEIGHT

        OUTPRE=lighttpd-requests
        DISP="-v req --title RequestsperSecond -u 1 \
            DEF:req=$INFILE:Requests:AVERAGE \
            DEF:reqmax=$INFILE:Requests:MAX \
            DEF:reqmin=$INFILE:Requests:MIN \
            CDEF:reqminmax=reqmax,reqmin,- \
            AREA:reqmin#ffffff: \
            STACK:reqminmax#00f000: \
            LINE1:reqmin#a0a0a0: \
            LINE1:reqmax#a0a0a0: \
            LINE2:req#00a735:requests"

        $RRDTOOL graph $OUTDIR/$OUTPRE-hour.png  -a PNG --start -14400   $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-day.png   -a PNG --start -86400   $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-month.png -a PNG --start -2592000 $DISP -w $WIDTH -h $HEIGHT

    Basically it's not my script - I got it from somewhere on the internet, and I don't want to use any additional packages. As you can see, lighttpd populates the lighttpd.rrd file with traffic data and requests per second. Now I would like the system to populate a second rrd file with CPU and MEM usage, so that I can add code to the sh file to generate graphs for this data as well. How can I populate an RRD file with CPU and MEM usage data? Please, NO THIRD-PARTY tools!
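
    A sketch of one way to do this with nothing beyond rrdtool and /proc (the rrd path, DS names and heartbeats below are illustrative choices, not a standard): create a second rrd with a GAUGE data source per metric, then have cron run a small script every minute that derives CPU usage from two samples of /proc/stat and memory usage from /proc/meminfo.

        #!/bin/sh
        RRD=/var/www/system.rrd

        # One-time creation: two gauges sampled every 60s, one month of history.
        [ -f "$RRD" ] || /usr/bin/rrdtool create "$RRD" --step 60 \
            DS:cpu:GAUGE:120:0:100 \
            DS:mem:GAUGE:120:0:100 \
            RRA:AVERAGE:0.5:1:44640 RRA:MIN:0.5:1:44640 RRA:MAX:0.5:1:44640

        # CPU %: compare two readings of the aggregate "cpu" line, 1s apart.
        CPU=$( (grep '^cpu ' /proc/stat; sleep 1; grep '^cpu ' /proc/stat) | \
            awk 'NR==1 { idle=$5; total=$2+$3+$4+$5+$6+$7+$8 }
                 NR==2 { di=$5-idle; dt=$2+$3+$4+$5+$6+$7+$8-total;
                         printf "%.1f", 100*(dt-di)/dt }' )

        # MEM %: physical memory in use (buffers/cache counted as used here).
        MEM=$(awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2}
                   END {printf "%.1f", 100*(t-f)/t}' /proc/meminfo)

        /usr/bin/rrdtool update "$RRD" N:$CPU:$MEM

    With a crontab line such as "* * * * * /usr/local/bin/sysstats.sh", the graphing section of the existing script can then be extended with DEF:cpu=/var/www/system.rrd:cpu:AVERAGE and DEF:mem=/var/www/system.rrd:mem:AVERAGE lines, exactly like the lighttpd ones.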


  • Kindly guide me to buy a new laptop [on hold]

    - by Its me 007
    I am from India and I want to buy a new laptop. I have shortlisted a few, but I am confused about which processor, chipset and graphics option would best suit my requirements. NOTE: NOT ABLE TO POST THE LINKS - YOU WILL HAVE TO COPY-PASTE THEM. SORRY.

    1) HP Pavilion 15-N004TX - 4th Gen Ci5-4200U / 4GB RAM / 500GB HDD / 1GB Radeon graphics - Rs 39990
       www.homeshop18.com/hp-pavilion-15-n004tx-laptop-4th-gen-intel-core-i5-4200u-4gb-500gb-15-6-linux-silver-black/computers-tablets/laptops/product:30989197/cid:16317/
    2) Lenovo Essential G510 (59-398452) - 4th Gen Ci5-4200M / 4GB / 500GB / Win8 / 2GB ATI Sunpro 8570 graphics - Rs 44969
       www.flipkart.com/lenovo-essential-g510-59-398452-laptop-4th-gen-ci5-4gb-500gb-win8-2gb-graph/p/itmdp26eprwf5k5v?gclid=CMnh99GA2LoCFaRU4godNiUAGQ&semcmpid=sem_7847244212_laptopsnew_goog&tgi=sem%2C1%2CG%2C7847244212%2Cg%2Csearch%2C%2C24387103114%2C1t1%2Cb%2C%2Blenovo+%2Bg510%2F59+%2B398452%2Cc%2C%2C%2C%2C%2C%2C%2C2
    3) HP Pavilion G6-2303TX (3rd Gen Ci5-3230M / 4GB / 500GB / DOS / 1GB graphics) - Rs 40500
       www.flipkart.com/hp-pavilion-g6-2303tx-laptop-3rd-gen-ci5-4gb-500gb-dos-1gb-graph/p/itmdm6yzh4gr4cxd?pid=COMDM6YHWMGDRDEZ&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81
    4) HP Pavilion 15-E039TX (3rd Gen Ci5-3230M / 4GB / 1TB / Win8 / 2GB graphics) - Rs 46690
       www.flipkart.com/hp-pavilion-15-e039tx-laptop-3rd-gen-ci5-4gb-1tb-win8-2gb-graph/p/itmdn4d9wykhdcpz?pid=COMDN4CZGFMGJNTN&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81

    Now I am confused about the following: Which processor and chipset are best? How much graphics memory is enough? (I am not a gamer.) Are any of these laptops future-proof, i.e. will they at least support upcoming programming software that demands more processor and memory? The laptop will mainly be used for multitasking, and it should at least be capable of running the following for the next 4 years: Visual Studio 2012 and upcoming versions; SQL Server 2008 R2 and above; SharePoint; Blend; Photoshop. Kindly suggest, and if anyone knows a good laptop with a good configuration within a 50k budget, kindly suggest that too. Thanks in advance.


  • Database Mirroring in SQL Server

    - by jbp117
    I have two databases that are mirrored to another server using database mirroring. The mirror server has to be down for some reason for a few days, so the production server's principal databases are now in the PRINCIPAL/DISCONNECTED state. Clients can still access those databases. So what happens while they keep adding data to these databases? Will the data be committed, or will it wait until the mirror comes back up?
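
    For what it's worth, a sketch of how to watch what happens while the partner is down, using only standard catalog views: in the DISCONNECTED state the principal keeps accepting and committing transactions, but it must retain the log records it cannot send, so the transaction log grows until the mirror reconnects.

        -- Mirroring role and state for every mirrored database
        SELECT DB_NAME(database_id) AS db_name,
               mirroring_role_desc, mirroring_state_desc
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL;

        -- Why the log cannot be truncated (shows DATABASE_MIRRORING while disconnected)
        SELECT name, log_reuse_wait_desc FROM sys.databases;

        -- Log file sizes and percentage used, worth watching over those few days
        DBCC SQLPERF(LOGSPACE);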


  • How to migrate the data directory for MSSQL Server?

    - by Ryan
    I have an installation of MSSQL where I would like to move the data directory to another drive so that all the existing databases are located there and all new databases are created there, as well as the backups, logs, etc. I know I can detach/attach the existing databases, but what about the rest of the settings (backup, new databases)? Is this possible without an uninstall/reinstall? Thank you.
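
    A sketch of the moving parts, since this can all be done without reinstalling (the database name and paths here are placeholders): existing databases move with detach/attach (or backup/restore), while the locations used for new databases and for backups are instance-level settings - in SSMS they live under Server Properties > Database Settings ("Database default locations" and the default backup directory).

        USE master;
        ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        EXEC sp_detach_db @dbname = N'MyDb';

        -- move MyDb.mdf / MyDb_log.ldf to D:\SQLData\ in the file system, then:

        CREATE DATABASE MyDb
            ON (FILENAME = N'D:\SQLData\MyDb.mdf'),
               (FILENAME = N'D:\SQLData\MyDb_log.ldf')
            FOR ATTACH;
        ALTER DATABASE MyDb SET MULTI_USER;

    System databases (tempdb in particular) have their own documented move procedure, so they are worth treating separately.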


  • Why does Ruby have both to_s and inspect?

    - by prosseek
    p calls inspect, and puts/print call to_s to represent an object. If I run

        class Graph
          def initialize
            @nodeArray = Array.new
            @wireArray = Array.new
          end

          def to_s # called by print / puts
            "Graph : #{@nodeArray.size}"
          end

          def inspect # called by p
            "G"
          end
        end

        if __FILE__ == $0
          gr = Graph.new
          p gr
          print gr
          puts gr
        end

    I get

        G
        Graph : 0Graph : 0

    Then why does Ruby have two methods that do the same thing? What is the difference between to_s and inspect? (If I comment out the to_s or inspect method, I get the default implementation's output instead.)
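
    They only look the same because this example overrides both: by convention to_s is the human-readable form (used by puts, print and string interpolation), while inspect is the unambiguous developer-facing form (used by p and the irb prompt; by default it shows the class, object id and instance variables). A small sketch of the convention:

        class Point
          def initialize(x, y)
            @x, @y = x, y
          end

          def to_s      # terse, for end users
            "(#{@x}, #{@y})"
          end

          def inspect   # explicit, for debugging
            "#<Point x=#{@x} y=#{@y}>"
          end
        end

        pt = Point.new(1, 2)
        puts pt           # => (1, 2)
        puts "at #{pt}"   # => at (1, 2)        (interpolation uses to_s)
        p pt              # => #<Point x=1 y=2> (p uses inspect)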


  • C++ code parser/processor library

    - by uray
    Is there any library that parses C++ source code to produce, let's say, a call graph, a class inheritance tree, control flow, a class member list - anything as a ready-to-use graph or structure in code (not as a diagram image)? To make it clearer: to generate a call-graph image, the process would look like this, and it's the intermediate structure that I need:

        C++ source -> parser -> intermediate structure -> renderer -> call graph image
                                          ^
                                          |
                                     [I need this]
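
    Clang can do exactly this: its C API (libclang) parses C++ and exposes a cursor-based AST that you can walk to build your own structures. A sketch that dumps the raw material for a call graph - every function/method declaration and every call expression (error handling omitted, and real code would also pass the proper compiler flags to the parser):

        #include <stdio.h>
        #include <clang-c/Index.h>

        static enum CXChildVisitResult visit(CXCursor c, CXCursor parent,
                                             CXClientData data) {
            enum CXCursorKind kind = clang_getCursorKind(c);
            if (kind == CXCursor_FunctionDecl || kind == CXCursor_CXXMethod ||
                kind == CXCursor_CallExpr) {
                CXString name = clang_getCursorSpelling(c);
                printf("%s: %s\n",
                       kind == CXCursor_CallExpr ? "call" : "decl",
                       clang_getCString(name));
                clang_disposeString(name);
            }
            return CXChildVisit_Recurse; /* walk the whole tree */
        }

        int main(int argc, char **argv) {
            CXIndex index = clang_createIndex(0, 0);
            CXTranslationUnit tu = clang_parseTranslationUnit(
                index, argv[1], NULL, 0, NULL, 0, CXTranslationUnit_None);
            clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, NULL);
            clang_disposeTranslationUnit(tu);
            clang_disposeIndex(index);
            return 0;
        }

    Linking against libclang (cc walk.c -lclang) keeps you independent of Clang's heavier C++ internals; Doxygen's XML output and GCC-XML are alternative routes to a machine-readable structure.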


  • Cross-database transactions from one SP

    - by Michael Bray
    I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using 'Linked Servers', and the SQL Server versions are mixed (SQL 2008, SQL 2005 and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently. Which of the following is the most accurate? Will a single BEGIN/COMMIT TRANSACTION guarantee that all statements across all databases are successful? Will I need multiple BEGIN TRANSACTIONs, one for each individual set of commands against a database? Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support. Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure, if possible. Any other suggestions are welcome as well. Thanks!
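
    A sketch of the distributed-transaction route, for reference (the linked-server, database and table names are placeholders): statements against linked servers can be enlisted in one atomic unit with BEGIN DISTRIBUTED TRANSACTION, which hands coordination to MS DTC - so DTC must be running and reachable on every server involved, which is often the hard part with a mixed 2000/2005/2008 estate.

        SET XACT_ABORT ON;  -- any error dooms the whole distributed transaction
        BEGIN DISTRIBUTED TRANSACTION;

            UPDATE dbo.Accounts
            SET    Balance = Balance - 100 WHERE AccountId = 42;

            UPDATE Srv2005.Finance.dbo.Accounts
            SET    Balance = Balance + 100 WHERE AccountId = 42;

            INSERT INTO Srv2000.Audit.dbo.TransferLog (AccountId, Delta)
            VALUES (42, 100);

        COMMIT TRANSACTION;  -- two-phase commit across all the servers involved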


  • Using amCharts in Ruby on Rails

    - by Dexter
    I have followed this tutorial in order to use amCharts, and it worked with no problems. Now I am trying to generate a chart with amCharts to show each user and their sign-in count, but I can't make it work because it is not getting the data correctly. What am I missing here? How can I show user email and sign_in_count?

    users_controller.rb

        class UsersController < ApplicationController
          load_and_authorize_resource

          def index
            @users = User.all
            respond_to do |format|
              format.html # index.html.erb
              format.json { render :json => @users }
            end
          end

          def show
            @user = User.find(params[:id])
          end

          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              flash[:notice] = 'A new user created successfully.'
              redirect_to users_path
            else
              flash[:error] = 'An error occurred please try again!'
              redirect_to users_path
            end
          end

          def edit
            @user = User.find(params[:id])
          end

          def update
            @user = User.find(params[:id])
            if @user.update_attributes(params[:user])
              flash[:notice] = 'Profile updated'
              redirect_to users_path
            else
              render 'edit'
            end
          end

          def destroy
            @user = User.find(params[:id])
            if current_user == (@user)
              flash[:error] = "Admin suicide warning: Can't delete yourself."
            else
              @user.destroy
              flash[:notice] = 'User deleted'
              redirect_to users_path
            end
          end

          def checkname
            if User.where('user_name = ?', params[:user]).count == 0
              render :nothing => true, :status => 200
            else
              render :nothing => true, :status => 409
            end
            return
          end
        end

    users_helper.rb

        module UsersHelper
          def convert_to_amcharts_json(data_array)
            data_array.to_json.gsub(/\"text\"/, "text").html_safe
          end
        end

    index.html.erb

        <div id="chartdiv" style="width: 100%; height: 400px;"></div>
        <script type="text/javascript">
            var chart;
            var chartData = <%= convert_to_amcharts_json(@users) %>;

            AmCharts.ready(function () {
                // SERIAL CHART
                chart = new AmCharts.AmSerialChart();
                chart.dataProvider = chartData;
                chart.categoryField = "email";
                // the following two lines make the chart 3D
                chart.depth3D = 20;
                chart.angle = 30;

                // AXES
                // category
                var categoryAxis = chart.categoryAxis;
                categoryAxis.labelRotation = 90;
                categoryAxis.dashLength = 5;
                categoryAxis.gridPosition = "start";

                // value
                var valueAxis = new AmCharts.ValueAxis();
                valueAxis.title = "Most Active users";
                valueAxis.dashLength = 5;
                chart.addValueAxis(valueAxis);

                // GRAPH
                var graph = new AmCharts.AmGraph();
                graph.valueField = "sign_in_count";
                graph.colorField = "color";
                graph.balloonText = "<span style='font-size:14px'>[[category]]: <b>[[value]]</b></span>";
                graph.type = "column";
                graph.lineAlpha = 0;
                graph.fillAlphas = 1;
                chart.addGraph(graph);

                // CURSOR
                var chartCursor = new AmCharts.ChartCursor();
                chartCursor.cursorAlpha = 0;
                chartCursor.zoomable = false;
                chartCursor.categoryBalloonEnabled = false;
                chart.addChartCursor(chartCursor);

                // WRITE
                chart.write("chartdiv");
            });
        </script>
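
    One likely culprit, judging only from the code shown: @users.to_json serializes every attribute of every user, while amCharts just needs an array of {email, sign_in_count} objects as its dataProvider. A sketch of restricting the JSON in the helper (the gsub from the tutorial isn't needed for this chart):

        # users_helper.rb - only expose what the chart needs
        module UsersHelper
          def convert_to_amcharts_json(users)
            users.as_json(:only => [:email, :sign_in_count]).to_json.html_safe
          end
        end

    With that in place, chartData becomes e.g. [{"email":"a@b.com","sign_in_count":7}, ...], which matches categoryField and valueField above.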


  • How to notify client about updated UpdatePanel content on server side

    - by csh1981
    I have a problem with UpdatePanel.Update(), which works initially but then stops. I have tumbled with this problem for some time, and some background is needed, so please read on.

    I have an ASP.NET application with a subpage that displays computed information in graphs. Each graph is embedded in an UpdatePanel. The graph is a user control that uses the standard asp:Chart for display. My task is to make this page responsive during postbacks by adding AJAX capabilities. When I access this page from another page, during the initial rendering I show a wait dialog for each graph and hook a pageload event on the client side. In the client event, a hidden button is clicked, and a server event handles it (the hidden button is inside an UpdatePanel, so the postback is asynchronous). Each graph is computed, and the UpdatePanels are in turn updated with the Chart content using UpdatePanel.Update(). This works.

    However, I also have some dynamically created RadioButtons on the page. Their purpose is to switch graph type - to show the same data in a different way. The same kind of time-consuming computation is needed for that. I subscribe to each RadioButton's OnCheckedChanged event, and the postback is asynchronous because the radio buttons are inside an UpdatePanel. In the server event handler I determine the type of graph and use this as input to the Chart control. I then remove the old Chart control from my Panel, add the new Chart, and call UpdatePanel.Update() - but with no success. Nothing happens, no errors, nothing. Why is this?

    I find this strange because if I compute every chart's data during the initial rendering, instead of using the wait-dialog solution described above, then I can switch graph types successfully and all subsequent AJAX requests work as intended. Also, the same code (computing the chart, removing and adding the Chart control to the Panel, and UpdatePanel.Update()) is hit during the initial rendering, and it works only that first time. Here is the method that computes the graph, adds it to the panel and updates the UpdatePanel:

        public void UpdateGraph(GraphType type, GraphMapper mapper)
        {
            // 'panel' is the content of UpdatePanelGraph
            panel.Controls.Clear();
            chart = new Chart(type, mapper); // computation happens inside here
            panel.Controls.Add(chart);

            // UpdatePanelGraph is in UpdateMode Conditional and has
            // ChildrenAsTriggers set to false
            UpdatePanelGraph.Update();
        }

    I really need a way to make these radio buttons work, possibly using some client-side JavaScript or another way of handling things on the server side. I have thought about using a JavaScript postback call on the UpdatePanel instead of UpdatePanel.Update(), but the issue there is how to notify the client side when the server side has finished computing the graph. A plausible explanation of the strange behaviour would also be much appreciated. Any help appreciated, thanks.
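
    Two hedged things worth checking, as sketches only since the page markup isn't shown (control and method names below are placeholders): dynamically created controls in WebForms must be recreated on every request, early in the life cycle, or the panel's content silently falls out of sync; and a dynamically created RadioButton doesn't automatically become an asynchronous trigger for a different UpdatePanel, which can be forced with ScriptManager.RegisterAsyncPostBackControl:

        protected void Page_Init(object sender, EventArgs e)
        {
            // Recreate the dynamic radio buttons on *every* request, not just
            // the first one, so view state and CheckedChanged wiring line up.
            foreach (RadioButton rb in BuildGraphTypeButtons()) // hypothetical helper
            {
                optionsPanel.Controls.Add(rb);
                // Let this control trigger async updates of other panels.
                ScriptManager.GetCurrent(Page).RegisterAsyncPostBackControl(rb);
            }
        }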


  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when tricks learned in native code play a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be...

    The rules in C# when optimizing code are very different than in C or C++. Often, they're exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;

        // ...
        return bar;

    Normally, if "DataStructure" was a simple data type, I could just allocate it on the stack. However, its constructor internally allocated its own memory using new, so this wouldn't eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit.

    In this case, after profiling the new version, I found that it increased the overall performance dramatically - my main test case went from 35 minutes of runtime down to 21 minutes. This was such a significant improvement that I felt it was worth the reduction in maintainability. In C and C++, it's generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory.

    I've seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable - passing around object references tends to couple the methods together more than necessary and increases the overall complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them - deep inside the call graph, inside the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this serves some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different - it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable - objects are used and "released" as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now - nowhere here am I suggesting that these rules are hard, fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind - interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects. Those COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.

    Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations on every iteration. This required going back to a "native programming" mindset for optimization. Lifting these variables out and reusing them took a 1:10 routine down to 0:20 - again, a very worthwhile improvement. Overall, the lessons here are:

    - Always profile if you suspect a performance problem - don't assume any rule is correct, or that any code is efficient just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently than "normal" managed code
    - Native code can be hidden very cleverly inside of managed wrappers


  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for the protection of the business. Yes, I said business - not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'd be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance

    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and for none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks; clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies

    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup to the network location you define after completing the backup, so it doesn't cause any additional blocking or resource use within the backup process. Creating a copy acts as a mechanism of protection for your backups: you can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get done within the native Microsoft backup engine.

    Offsite Storage

    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, and even into the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product: you'll be directed to the web site for our hosted storage, where you can set up an account.

    Compression

    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product, which means you can get a little compression faster, or you can sacrifice some CPU time and get even more compression. You decide. As a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53MB; our file was 33MB. That's a file that is smaller by 38% - not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. For this test, if you wanted maximum compression with minimum CPU use you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources - and that compression is still better than the native one by 10%.

    Restore Testing

    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule - all done through our wizard, which gives you as much guidance as you get when running backups - you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you decide whether to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there; if not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and which checks to limit or add. With this you could offload some DBCC checks from your production system, running only the physical checks on your production box and the full check against this backup - which makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of whether the tests pass. All of this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.
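
    For comparison, the native T-SQL baseline that this keeps referring to looks something like the following sketch (paths, database name and logical file names are placeholders):

        -- Compressed, checksummed backup (native compression: 2008 Enterprise,
        -- or Standard/Enterprise from 2008 R2 onwards)
        BACKUP DATABASE AdventureWorks2012
            TO DISK = N'E:\Backups\AdventureWorks2012.bak'
            WITH COMPRESSION, CHECKSUM, INIT;

        -- Header and checksum validation only - still not a complete test
        RESTORE VERIFYONLY
            FROM DISK = N'E:\Backups\AdventureWorks2012.bak'
            WITH CHECKSUM;

        -- The only complete test: an actual restore plus a consistency check
        RESTORE DATABASE AdventureWorks2012_Test
            FROM DISK = N'E:\Backups\AdventureWorks2012.bak'
            WITH MOVE N'AdventureWorks2012_Data' TO N'E:\Scratch\AW_Test.mdf',
                 MOVE N'AdventureWorks2012_Log'  TO N'E:\Scratch\AW_Test.ldf';
        DBCC CHECKDB (AdventureWorks2012_Test);

    Everything described above - scheduling these restores, the reminders, the DBCC options - is automation layered on top of these same primitives.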

    Single Point of Management

    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases' and servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard

    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary

    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native backups. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.


  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.

    Don't Overreact and Just Throw Everything on ACFS!

    First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following:

    - Running any Oracle 10gR2 databases on Exadata.
    - Running Oracle 11gR2 development or test databases that require rapid cloning and that do not require the performance benefits of the Exadata storage cells.

    If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option).

    The Fine Print

    There are some requirements you will need to meet if you are going to run ACFS on Exadata:

    - You have to use Oracle Linux.
    - You must use GI 12.1.0.2 or later.
    - If you wish to use HCC, you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time I wrote this), so you will need to open an SR and have support provide the patch for you.

    The Best Use Case for ACFS

    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly just the improved processing capabilities of Exadata - with its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features - justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!


  • Big Data – Learning Basics of Big Data in 21 Days – Bookmark

    - by Pinal Dave
    Earlier this month I had a great time writing the Basics of Big Data series. The series received a great response and lots of good comments; I am going to follow up this basics series with a more in-depth series in the near future. Here is the consolidated post where you can find all 21 blog posts together. Bookmark this page for future reference.

    - Big Data - Beginning Big Data - Day 1 of 21
    - Big Data - What is Big Data - 3 Vs of Big Data - Volume, Velocity and Variety - Day 2 of 21
    - Big Data - Evolution of Big Data - Day 3 of 21
    - Big Data - Basics of Big Data Architecture - Day 4 of 21
    - Big Data - Buzz Words: What is NoSQL - Day 5 of 21
    - Big Data - Buzz Words: What is Hadoop - Day 6 of 21
    - Big Data - Buzz Words: What is MapReduce - Day 7 of 21
    - Big Data - Buzz Words: What is HDFS - Day 8 of 21
    - Big Data - Buzz Words: Importance of Relational Database in Big Data World - Day 9 of 21
    - Big Data - Buzz Words: What is NewSQL - Day 10 of 21
    - Big Data - Role of Cloud Computing in Big Data - Day 11 of 21
    - Big Data - Operational Databases Supporting Big Data - RDBMS and NoSQL - Day 12 of 21
    - Big Data - Operational Databases Supporting Big Data - Key-Value Pair Databases and Document Databases - Day 13 of 21
    - Big Data - Operational Databases Supporting Big Data - Columnar, Graph and Spatial Database - Day 14 of 21
    - Big Data - Data Mining with Hive - What is Hive? - What is HiveQL (HQL)? - Day 15 of 21
    - Big Data - Interacting with Hadoop - What is PIG? - What is PIG Latin? - Day 16 of 21
    - Big Data - Interacting with Hadoop - What is Sqoop? - What is Zookeeper? - Day 17 of 21
    - Big Data - Basics of Big Data Analytics - Day 18 of 21
    - Big Data - How to become a Data Scientist and Learn Data Science? - Day 19 of 21
    - Big Data - Various Learning Resources - How to Start with Big Data? - Day 20 of 21
    - Big Data - Final Wrap and What Next - Day 21 of 21

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Firefox throwing an exception with HTML Canvas putImageData

    - by mr.doob
    So I was working on this little JavaScript experiment and I needed a widget to track its FPS. I ported a widget I've been using with ActionScript 3 to JavaScript, and it seems to work fine in Chrome/Safari, but Firefox throws an exception. This is the experiment: Depth of Field. This is the error:

        [Exception... "An invalid or illegal string was specified" code: "12"
         nsresult: "0x8053000c (NS_ERROR_DOM_SYNTAX_ERR)"
         location: "http://mrdoob.com/projects/chromeexperiments/depth_of_field__debug/js/net/hires/debug/Stats.js Line: 105"]

    The line it is complaining about is this one:

        graph.putImageData(graphData, 1, 0, 0, 0, 69, 50);

    Which is crappy code to "scroll" the bitmap pixels. The idea is that I only draw a few pixels on the left of the bitmap, and then on the next frame I copy the whole bitmap and paste it one pixel to the right. This error is usually thrown because you're pasting a bitmap bigger than the source and it goes off the limits, but in theory that shouldn't be the case here, as I'm defining 69 as the width of the rectangle to paste (the bitmap being 70px wide). And this is the full code:

        var Stats = {

            baseFps: null, timer: null, timerStart: null, timerLast: null,
            fps: null, ms: null, container: null, fpsText: null, msText: null,
            memText: null, memMaxText: null, graph: null, graphData: null,

            init: function (userfps) {
                baseFps = userfps;
                timer = 0;
                timerStart = new Date() - 0;
                timerLast = 0;
                fps = 0;
                ms = 0;

                container = document.createElement("div");
                container.style.fontFamily = 'Arial';
                container.style.fontSize = '10px';
                container.style.backgroundColor = '#000033';
                container.style.width = '70px';
                container.style.paddingTop = '2px';

                fpsText = document.createElement("div");
                fpsText.style.color = '#ffff00';
                fpsText.style.marginLeft = '3px';
                fpsText.style.marginBottom = '-3px';
                fpsText.innerHTML = "FPS:";
                container.appendChild(fpsText);

                msText = document.createElement("div");
                msText.style.color = '#00ff00';
                msText.style.marginLeft = '3px';
                msText.style.marginBottom = '-3px';
                msText.innerHTML = "MS:";
                container.appendChild(msText);

                memText = document.createElement("div");
                memText.style.color = '#00ffff';
                memText.style.marginLeft = '3px';
                memText.style.marginBottom = '-3px';
                memText.innerHTML = "MEM:";
                container.appendChild(memText);

                memMaxText = document.createElement("div");
                memMaxText.style.color = '#ff0070';
                memMaxText.style.marginLeft = '3px';
                memMaxText.style.marginBottom = '3px';
                memMaxText.innerHTML = "MAX:";
                container.appendChild(memMaxText);

                var canvas = document.createElement("canvas");
                canvas.width = 70;
                canvas.height = 50;
                container.appendChild(canvas);

                graph = canvas.getContext("2d");
                graph.fillStyle = '#000033';
                graph.fillRect(0, 0, canvas.width, canvas.height);
                graphData = graph.getImageData(0, 0, canvas.width, canvas.height);

                setInterval(this.update, 1000 / baseFps);
                return container;
            },

            update: function () {
                timer = new Date() - timerStart;
                if ((timer - 1000) > timerLast) {
                    fpsText.innerHTML = "FPS: " + fps + " / " + baseFps;
                    timerLast = timer;

                    graph.putImageData(graphData, 1, 0, 0, 0, 69, 50);
                    graph.fillRect(0, 0, 1, 50);
                    graphData = graph.getImageData(0, 0, 70, 50);

                    var index = (Math.floor(Math.min(50, (fps / baseFps) * 50)) * 280 /* 70 * 4 */);
                    graphData.data[index] = graphData.data[index + 1] = 256;

                    index = (Math.floor(Math.min(50, 50 - (timer - ms) * .5)) * 280 /* 70 * 4 */);
                    graphData.data[index + 1] = 256;

                    graph.putImageData(graphData, 0, 0);
                    fps = 0;
                }
                ++fps;
                msText.innerHTML = "MS: " + (timer - ms);
                ms = timer;
            }
        }

    Any ideas? Thanks in advance.
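
    For what it's worth, a sturdier way to do the scroll that sidesteps putImageData's dirty-rectangle arguments entirely (and is typically faster): copy the canvas through a small offscreen buffer with drawImage, and only use ImageData for plotting the new pixels. A sketch - graphCanvas stands for the canvas created in init, which the original code would need to keep a reference to:

        // One-time setup: an offscreen buffer the same size as the graph.
        var buffer = document.createElement("canvas");
        buffer.width = 70;
        buffer.height = 50;
        var bufferCtx = buffer.getContext("2d");

        // Each frame, instead of the putImageData/getImageData round trip:
        bufferCtx.clearRect(0, 0, 70, 50);
        bufferCtx.drawImage(graphCanvas, 0, 0); // snapshot the current graph
        graph.drawImage(buffer, 1, 0);          // paste it back shifted 1px right
        graph.fillRect(0, 0, 1, 50);            // repaint the new left column

    drawImage between canvases is well supported across browsers, so the scroll no longer depends on each engine's interpretation of the seven-argument putImageData form.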


  • Long labels appear to be hidden with "..." - MS Chart Pie Graph control

    - by Mike
    I would like the labels to be completely visible and, if necessary, just spin the pie chart so that the text fits without being shortened to "...". Does anyone know how to prevent the truncation? This is the control on my .aspx page:

        <asp:CHART ID="Chart1" runat="server" BorderColor="181, 64, 1"
            BorderDashStyle="Solid" BorderWidth="2" Height="371px"
            ImageLocation="~/TempImages/ChartPic_#SEQ(300,3)" ImageType="Png"
            Palette="None" Width="693px" BorderlineColor="">
            <legends>
                <asp:Legend BackColor="Transparent" Enabled="False"
                    Font="Trebuchet MS, 8.25pt, style=Bold" IsTextAutoFit="True"
                    Name="Default">
                </asp:Legend>
            </legends>
            <series>
                <asp:Series ChartArea="ChartArea1" ChartType="Pie" Legend="Default"
                    Name="Series1"
                    CustomProperties="PieLabelStyle=Outside, PieDrawingStyle=Concave"
                    YValuesPerPoint="6" Font="Trebuchet MS, 8.25pt, style=Bold">
                    <SmartLabelStyle AllowOutsidePlotArea="No" MaxMovingDistance="100" />
                </asp:Series>
            </series>
            <chartareas>
                <asp:ChartArea BackColor="#DEEDF7" BackGradientStyle="TopBottom"
                    BackSecondaryColor="White" BorderColor="64, 64, 64, 64"
                    BorderDashStyle="Solid" Name="ChartArea1" ShadowColor="Transparent">
                    <Area3DStyle Enable3D="True" IsRightAngleAxes="False" />
                </asp:ChartArea>
            </chartareas>
        </asp:CHART>

    Thanks.
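
    A guess at where to start, since the truncation usually comes from smart labels being confined to the plot area: let the outside labels escape the plot rectangle and, if needed, let the pie shrink to make room. The relevant knobs would look something like the sketch below - MinimumRelativePieSize in particular is an assumption worth verifying against the MS Chart custom-properties documentation:

        <asp:Series ChartArea="ChartArea1" ChartType="Pie" Legend="Default"
            Name="Series1"
            CustomProperties="PieLabelStyle=Outside, PieDrawingStyle=Concave,
                              MinimumRelativePieSize=30"
            YValuesPerPoint="6" Font="Trebuchet MS, 8.25pt, style=Bold">
            <!-- Allow labels to be drawn outside the inner plot rectangle -->
            <SmartLabelStyle AllowOutsidePlotArea="Yes" MaxMovingDistance="100" />
        </asp:Series>

    Widening the chart or shrinking the label font are cruder fallbacks if long labels still collide.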


  • Java heap keeps on shrinking! What is happening in this graph of heap size?

    - by chillitom
    Hi Guys, This is a screen shot of a JVM (win64, 6u17) running ActiveMQ, after every garbage collection the heap size is reducing. As the heap size reduces garbage collection gets more frequent and the heap reduces more quickly. Eventually the VM locks up as it's spending all it's time in GC. -Xms is the default and -Xmx is 2048mb. What is happening!!? How can I avoid this? http://imagebin.org/92614 n.b originally posted on serverfault.com, moved to stackoverflow.com as requested


  • Algorithm for nice graph labels for time/date axis?

    - by Aaron
    Hello, I'm looking for a "nice numbers" algorithm for determining the labels on a date/time value axis. I'm familiar with Paul Heckbert's Nice Numbers algorithm (http://tinyurl.com/5gmk2c). I have a plot that displays time/date on the X axis, and the user can zoom in and look at a smaller time frame. I'm looking for an algorithm that picks nice dates to display on the ticks. For example: Looking at a day or so: 1/1 12:00, 1/1 4:00, 1/1 8:00... Looking at a week: 1/1, 1/2, 1/3... Looking at a month: 1/09, 2/09, 3/09... The nice label ticks don't need to correspond to the first visible point, but they should be close to it. Is anybody familiar with such an algorithm? Thanks
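
    One common adaptation of Heckbert's idea (a sketch, not a canonical algorithm): replace his 1-2-5 ladder with a ladder of calendar-friendly step sizes, pick the smallest step that keeps the tick count at or below a target, and round the first tick up to a multiple of that step.

        final class TimeTicks {
            // Calendar-friendly candidate steps, in seconds.
            private static final long[] STEPS = {
                1, 5, 15, 30,                  // seconds
                60, 300, 900, 1800,            // 1, 5, 15, 30 minutes
                3600, 4 * 3600, 12 * 3600,     // 1, 4, 12 hours
                86400, 7 * 86400               // 1 day, 1 week
            };

            /** Smallest step yielding at most maxTicks over [startSec, endSec]. */
            static long pickStep(long startSec, long endSec, int maxTicks) {
                for (long step : STEPS)
                    if ((endSec - startSec) / step <= maxTicks)
                        return step;
                return STEPS[STEPS.length - 1];
            }

            /** First tick at or after startSec that is a multiple of the step. */
            static long firstTick(long startSec, long step) {
                return ((startSec + step - 1) / step) * step;
            }
        }

    Ticks then run firstTick, firstTick + step, ... up to endSec. Two caveats: aligning epoch seconds to multiples of a step aligns days to UTC midnight, so local-time labels need the timezone offset folded in first; and months and years are not fixed-length, so at those scales real implementations switch from arithmetic to calendar fields (e.g. the 1st of each month).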


  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one who can ask a survey question. This is something I'm really curious about, because many of the answers, recommendations and rants in blogs are not universally applicable to every database - small databases must sometimes be treated differently, and uber-databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with?

    1) Up to 50GB
    2) 50-500GB
    3) 500GB - 2TB
    4) DEAR GOD, THAT'S TOO MUCH INFORMATION!


  • How can I manage SQL CE databases in SQL Server Management Studio?

    - by Edward Tanguay
    I created an SDF (SQL CE) database with Visual Studio 2008 (Add / New Item / Local Database). Is it possible to edit this database with SQL Server Management Studio? I tried to attach it, but the dialog only offered .mdf files, and attaching a .sdf file results in "failed to retrieve data for this request". And if editing is possible, can SDF files be created with Management Studio as well? Or are we stuck with the simple interface of the Visual Studio 2008 database manager?

