Search Results

Search found 62759 results on 2511 pages for 'view first'.


  • Codeigniter redirect repeats controller name in URL

    - by Obay
    This is my controller:

        class Timesheet extends Controller {
            ...
            function index() {
                // loads a view with a form that submits to "timesheet/change_date"
            }
            function summary() {
                // loads a view with a form that submits to "timesheet/change_week"
            }
            function change_date() {
                ...
                redirect('timesheet');
            }
            function change_week() {
                ...
                redirect('timesheet/summary');
            }
            ...
        }

    The first form is located at http://localhost/dts/index.php/timesheet, and when I submit the change_date form, it correctly goes through the change_date() function and reloads http://localhost/dts/index.php/timesheet. However, the second form is located at http://localhost/dts/index.php/timesheet/summary, and when I submit the change_week form, it goes through the change_week() function but ends up at http://localhost/dts/index.php/timesheet/timesheet/change_week. Notice the word "timesheet" is repeated, and when I submit the form again, another "timesheet" is added. What's wrong and how do I improve my code? My .htaccess is below:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|webroot|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]
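    A likely culprit (an assumption, since the view code isn't shown) is a relative form action: from the page /index.php/timesheet/summary, a relative action of "timesheet/change_week" resolves against the current URL and yields /index.php/timesheet/timesheet/change_week. A minimal sketch of the summary view using CodeIgniter's form helper, which emits an absolute URL the browser cannot re-resolve:

        <?php echo form_open('timesheet/change_week'); ?>
            <!-- week fields here -->
        <?php echo form_close(); ?>

    Equivalently, a plain action="<?php echo site_url('timesheet/change_week'); ?>" avoids the relative resolution.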

    Read the article

  • ASP:LinkButton and Eval

    - by sgibbons
    I'm using an ASP:LinkButton inside of an ItemTemplate, inside of a TemplateField, in a GridView. For the command argument for the link button I want to pass the ID of the row from the datasource that the GridView is bound to, so I'm doing something like this:

        <asp:LinkButton ID="viewLogButton" CommandName="viewLog"
            CommandArgument="<%#Eval("ID")%>" Text="View Log" runat="server"/>

    Unfortunately, the resulting HTML is this:

        <asp:LinkButton ID="viewLogButton" CommandName="viewLog"
            CommandArgument="3" Text="View Log" runat="server"/>

    It seems that it is parsing the Eval() properly, but this is somehow causing it not to parse the LinkButton tag and just dump it out as literal text. Does anyone know: a) why this is happening and, b) what a good solution to this problem is?
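    The usual fix (a sketch, not verified against this exact page) is to use single quotes for the outer attribute: the ASP.NET parser treats the nested double quotes around the databinding expression as closing the attribute, so the tag no longer parses as a server control and falls through as literal text:

        <asp:LinkButton ID="viewLogButton" CommandName="viewLog"
            CommandArgument='<%# Eval("ID") %>' Text="View Log" runat="server"/>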

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:

    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation?

    So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 Mpix). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 Mpix, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios.

    Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh…

    I think that about covers all the major aspects here.
    There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • MKMapView not calling delegate methods

    - by criscokid
    In a UIViewController I add an MKMapView to the view controlled by the controller:

        - (void)viewDidLoad {
            [super viewDidLoad];
            CGRect rect = CGRectMake(0, 0, 460, 320);
            map = [[MKMapView alloc] initWithFrame:rect];
            map.delegate = self;
            [self.view addSubview:map];
        }

    Later in the controller I have:

        - (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
            NSLog(@"done.");
        }

    "done." never gets printed. None of the other delegate methods get called either, like mapView:viewForAnnotation:. I use an MKMapView in another app, but this seems to happen in any new application I make. Has anyone else seen this behavior?

    EDIT: The problem seems to occur when the UIViewController is made the delegate of the MKMapView; a direct subclass of NSObject seems to work okay. I can work around it like this, but it still seems very odd since I've done it before.
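    Two things worth double-checking, stated as assumptions since the surrounding code isn't shown: that the controller declares MKMapViewDelegate conformance, and, since MKMapView does not retain its delegate, that the controller itself is retained for the life of the map view (a controller released after handing off its view can no longer receive callbacks). A minimal sketch of the declaration, with a hypothetical class name:

        // conformance makes the wiring explicit and lets the compiler
        // check the delegate method signatures
        @interface MapDemoViewController : UIViewController <MKMapViewDelegate> {
            MKMapView *map;
        }
        @end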

    Read the article

  • SurfaceView for Camera Preview won't get destroyed when pressing Power-Button

    - by for3st
    I want to implement a camera preview. For that I have a custom view, CameraView extends ViewGroup, that in its constructor programmatically creates a SurfaceView. I have the following components (highly simplified for brevity):

    ScannerFragment.java

        public View onCreateView(..) {
            // inflate view and get cameraView
        }
        public void onResume() {
            // open camera -> set rotation -> startPreview (in a thread) ->
            // set preview callback -> start decoding worker
        }
        public void onPause() {
            // stop decoding worker -> stop preview -> release camera
        }

    CameraView.java (extends ViewGroup)

        public void setUpCalledInConstructor(Context context) {
            // create a SurfaceView and add it to this ViewGroup ->
            // get SurfaceHolder and set callback
        }

        /* SurfaceHolder.Callback */
        public void surfaceCreated(SurfaceHolder holder) {
            camera.setPreviewDisplay(holder);
        }
        public void surfaceDestroyed(SurfaceHolder holder) {
            // NOTHING is done here
        }
        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            camera.getParameters().setPreviewSize(previewSize.width, previewSize.height);
        }

    fragment_scanner.xml

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent">
            <com.myapp.camera.CameraView
                android:id="@+id/cameraPreview"
                android:layout_width="match_parent"
                android:layout_height="match_parent"/>
        </RelativeLayout>

    I think I have the lifecycle correct (acquiring resources in onResume(), releasing them in onPause(), roughly said), and the following works just fine: pressing Home and returning; pressing the task switcher and returning; rotation. But one thing doesn't work: when I press the power button on the device and then return to the camera preview, the preview is stuck with the image that was last captured before the button was pressed. If I rotate, it works fine again, since it goes through the lifecycle.

    After some research I found out that this is probably due to the fact that the SurfaceView won't get destroyed when the power button is pressed, i.e. SurfaceHolder.Callback.surfaceDestroyed(SurfaceHolder holder) won't be called. And in fact, when I compare the (very verbose) log output of the home-button case and the power-button case, it's the same except that 'surfaceDestroyed' won't get called. So far I have found no solution whatsoever to work around it. I purposely avoid any resource-cleaning code in my surfaceDestroyed(), but this does not help. My idea was to manually destroy the surfaceView, like asked in this question, but this seems not possible. I also tested other applications with SurfaceViews/cameras and they don't seem to have this issue. So I would appreciate any hints or tips on that.
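    One common workaround (a sketch under stated assumptions, not a verified fix) is to detect in onResume() that the surface survived the screen-off cycle and restart the preview manually, since surfaceCreated() will not fire again for a surface that was never destroyed. This assumes CameraView exposes a hypothetical getPreviewHolder() accessor for the SurfaceView it created:

        @Override
        public void onResume() {
            super.onResume();
            camera = Camera.open();  // android.hardware.Camera
            SurfaceHolder holder = cameraView.getPreviewHolder(); // hypothetical accessor
            if (holder.getSurface() != null && holder.getSurface().isValid()) {
                // surface survived screen-off/on: surfaceCreated() won't be called,
                // so reattach the camera and restart the preview by hand
                try {
                    camera.setPreviewDisplay(holder);
                    camera.startPreview();
                } catch (IOException e) {
                    Log.e("ScannerFragment", "could not restart preview", e);
                }
            }
        }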

    Read the article

  • ListView and indeterminate ProgressBar

    - by Spredzy
    Sorry if the question sounds stupid, but: how do you activate a ProgressBar according to the cell you just clicked on? I have a list view with a menu that shows after a long press. When I click on one of the menu options, I would like to display the ProgressBar in the ListView row I clicked on. It is currently always displaying the one in my first cell, whatever I do:

        public boolean onContextItemSelected(MenuItem item) {
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            switch (item.getOrder()) {
            case 0:
                mProgressBar = (ProgressBar) findViewById(R.id.welcome_progressbar);
                mProgressBar.setVisibility(View.VISIBLE);
                // ... some execution ...
                return true;
            case 1:
                ...

    Does anyone see anything wrong?
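    A likely cause (an assumption, since the row layout isn't shown): findViewById() called on the Activity returns the first matching view in the whole window, which is the ProgressBar of the first row. The AdapterContextMenuInfo already identifies the long-pressed row, so the lookup can be scoped to it. A minimal sketch:

        public boolean onContextItemSelected(MenuItem item) {
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            // info.targetView is the row that was long-pressed; search only inside it
            ProgressBar rowBar = (ProgressBar) info.targetView.findViewById(R.id.welcome_progressbar);
            rowBar.setVisibility(View.VISIBLE);
            return true;
        }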

    Read the article

  • Displaying the Time in AM/PM format in android

    - by Rahul Varma
    Hi, I am getting the time using a time picker and displaying it in a text view using the following code:

        private TimePickerDialog.OnTimeSetListener mTimeSetListener =
            new TimePickerDialog.OnTimeSetListener() {
                public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
                    Toast.makeText(SendMail.this,
                        "Your Appointment time is " + hourOfDay + ":" + minute,
                        Toast.LENGTH_SHORT).show();
                    TextView datehid = (TextView) findViewById(R.id.timehidden);
                    datehid.setText(String.valueOf(hourOfDay) + ":" + String.valueOf(minute));
                }
            };

    Now, the issue is that when I set the time as 8:00 PM, the time is displayed as 20:0. I want to display the time as 8:00 PM. How can I do that?
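    One straightforward approach (a sketch, assuming java.text.SimpleDateFormat is acceptable here) is to put the picker's 24-hour values into a Calendar and let a formatter render the 12-hour form; the pattern "h:mm a" turns 20:00 into "8:00 PM":

        Calendar cal = Calendar.getInstance();
        cal.set(Calendar.HOUR_OF_DAY, hourOfDay);
        cal.set(Calendar.MINUTE, minute);
        SimpleDateFormat fmt = new SimpleDateFormat("h:mm a");
        datehid.setText(fmt.format(cal.getTime()));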

    Read the article

  • Handling undo and edit flag on window with several model objects

    - by Rui Pacheco
    Hi, I have a window that will hold several instances of a model object, listed in a table. The model object is updated using a text view. What is the best way to keep the edited flag and undo manager in sync with the content of the different model objects? I'm thinking of creating an instance of the undo manager on the model object and manually setting the undo manager for the text view every time the user chooses a new model object. Does the undo manager also handle the edited flag?
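    A sketch of the per-model approach described above, assuming an AppKit NSTextView whose delegate is the window controller; the delegate method undoManagerForTextView: lets the text view use whichever model's manager is currently selected (selectedModelObject is a hypothetical accessor):

        - (NSUndoManager *)undoManagerForTextView:(NSTextView *)aTextView {
            // vend the undo manager owned by the model object being edited
            return [[self selectedModelObject] undoManager];
        }

    The edited ("dirty") flag is separate: with NSDocument it is driven by the undo manager's change count; otherwise it has to be set manually, e.g. via -[NSWindow setDocumentEdited:].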

    Read the article

  • MVC best practice

    - by Patrick
    I'm new to MVC (I'm using CodeIgniter) and was wondering where I should put a "cut_description" function. My model retrieves a list of events including their descriptions. If a description is too long, I need to cut it after the first n words and add a "read more" link so the view doesn't get too cluttered. What would be the best practice: a) add the logic to cut after n words to the model; b) add the logic to the controller; or c) add it to the view? I think c) would be the easiest (I have to loop through the results anyway), but I'm not sure it would comply with MVC. What do you think?
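    For what it's worth, CodeIgniter ships a Text helper with word_limiter(), which keeps this presentation concern in the view; a minimal sketch (the event fields and route are hypothetical):

        <?php $this->load->helper(array('text', 'url')); ?>
        <?php foreach ($events as $event): ?>
            <p>
                <?php echo word_limiter($event['description'], 30); ?>
                <?php echo anchor('events/show/' . $event['id'], 'read more'); ?>
            </p>
        <?php endforeach; ?>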

    Read the article

  • ViewStateMode property in ASP.NET 4.0

    - by AspOnMyNet
    I haven’t yet started learning ASP.NET 4.0, but I did read a bit on ViewState, where there is a new property, ViewStateMode. In earlier versions of ASP.NET, if a parent control had its ViewState disabled, then child controls also had their ViewState disabled, even if their EnableViewState was set to true. a) Thus, if I understand correctly, a child control C with its ViewStateMode property set to "Enabled" saves its view state even if the parent control has its view state disabled? b) Is there a reason why the ViewStateMode property wasn't/couldn't be implemented in earlier versions of ASP.NET? Thanks
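    A minimal markup sketch of (a), based on the documented behavior of ViewStateMode in ASP.NET 4.0:

        <!-- parent disables view state; the child opts back in -->
        <asp:Panel ID="Parent" runat="server" ViewStateMode="Disabled">
            <asp:Label ID="Child" runat="server" ViewStateMode="Enabled"
                       Text="survives postback" />
        </asp:Panel>

    Note that ViewStateMode only takes effect while EnableViewState is true; with EnableViewState="false" the control ignores ViewStateMode entirely.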

    Read the article

  • Add image gallery to node in Drupal

    - by vian
    I'm using the Image module for Drupal 6 and its Image Gallery submodule. An image gallery is identified by a taxonomy term of a certain vocabulary. What I need is to attach the Image Gallery view supplied with Image Gallery to a certain node. I am now trying to use Views Attach to do this. But how can I pass the gallery's taxonomy term as an argument to the view? Also, I set the gallery taxonomy term to be defined for both images and my node type. Is that okay?

    Read the article

  • iOS Parse Login

    - by user3806600
    I am programming an iOS app that will use Parse to log in a user. Using the storyboard, I have the login button connected to another view controller with a push segue. Whether or not the username and password are correct, the button always pushes the new view controller. I might be doing this all wrong; any help is appreciated. Here is my code:

        - (IBAction)signIn:(id)sender {
            [PFUser logInWithUsernameInBackground:self.emailField.text
                                         password:self.passwordField.text
                                            block:^(PFUser *user, NSError *error) {
                if (!error) {
                    [self performSegueWithIdentifier:@"signIn" sender:nil];
                } else {
                    // The login failed. Check error to see why.
                }
            }];
        }
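    A likely explanation (an assumption, based on the description): a segue dragged from the button itself fires on every tap, before the asynchronous login returns. For performSegueWithIdentifier: to gate on success, the "signIn" segue has to be wired from the view controller to the destination instead of from the button. A sketch of the error branch once that is done:

        if (!error) {
            [self performSegueWithIdentifier:@"signIn" sender:nil];
        } else {
            // stay on this screen and surface the failure
            [[[UIAlertView alloc] initWithTitle:@"Login failed"
                                        message:[error localizedDescription]
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        }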

    Read the article

  • How to move an element in a sorted list and keep the CouchDb write "atomic"

    - by karlthorwald
    I have elements of a list in CouchDB documents. Let's say these are 3 elements in 3 documents:

        { "id": "783587346", "type": "aList", "content": "joey", "sort": 100.0 }
        { "id": "358734ff6", "type": "aList", "content": "jill", "sort": 110.0 }
        { "id": "abf587346", "type": "aList", "content": "jack", "sort": 120.0 }

    A view retrieves all "aList" documents and displays them sorted by "sort". Now I want to move the elements. When I want to move "jack" to the middle, I can do this atomically in one write by changing its sort key to 105.0. The view now returns the documents in the new sort order. But after a lot of sorting I could end up with sort keys like 50.99999 and 50.99998, and in extreme situations, after some years, run out of digits. What can you recommend; is there a better way to do this? I'd rather keep the elements in separate documents, since different users might edit different elements in parallel (which can also get tricky). Maybe there is a much better way?
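    A sketch of the midpoint scheme the question describes, in the JavaScript a CouchDB client would use (names are illustrative); the usual escape hatch, once neighbors get too close, is to renormalize: rewrite every element with fresh, widely spaced keys, one single-document write each:

        // returns a sort key between two neighbors, or null when the gap is
        // too small and the list should be renormalized (100, 110, 120, ...)
        function sortKeyBetween(prev, next) {
          if (next - prev < 1e-6) {
            return null;
          }
          return (prev + next) / 2;
        }

    A float64 sort key halves the available gap on every midpoint insert, so roughly fifty consecutive moves into the same gap exhaust it; renormalization resets that budget.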

    Read the article

  • simile exhibit ie7 bug

    - by two7s_clash
    I'm using Exhibit to, pretty simply, display some data on historical events and book publication dates: http://f1shw1ck.com/timeline3/exhibit.html. Everything works fine, and I have been able to get the timeline running as wanted, until I try to take control of the timeline with a timelineConfig script. After I add this, my timeline continues to work as expected in all browsers except IE. Curiously enough, band 0 initializes and is rendered correctly, but band 1 shows no events and does not sync to the band above. It is, however, picking up the width: "10%", intervalUnit: Timeline.DateTime.DECADE, intervalPixels: 60 specifications, as they are rendered correctly. Since everything works until I call ex:configuration="timelineConfig", since everything works in the other browsers, and since everything almost works in IE, up to the event painting, I have to imagine this is a JavaScript coding error on my part, but I just can't see it. All I get in my console is "Failed to create view View As Timeline." Incidentally, having or not having the gotoYear function seemingly does nothing to change this... Thanks for any tips.

    Read the article

  • How do I layout an image that will be dynamically sized in interface builder?

    - by Tony
    I am having trouble laying out a scrollable view in Interface Builder. A screen shot of my layout is here. As you can see, the layout is pretty simple. The UIImageView above the text will not have a specific height; it may be 100px high or 300px high. This is why the view is scrollable. I am experiencing two problems with this layout: 1) For some reason the image sits behind the text instead of pushing it down (take a look here). 2) The other, obvious problem is that the uppermost UIImage and UILabel are getting pushed up off of the screen. I am thinking this has to do with the UIScrollView, but I haven't been able to figure out why. Thanks!
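    Since the image height is only known at runtime, one common approach (a sketch with hypothetical outlet names, not tied to the screenshots) is to re-stack the frames in code after the image is set and then size the scroll view's content to match:

        CGFloat w = self.scrollView.bounds.size.width;
        CGFloat imageH = self.imageView.image.size.height;
        self.imageView.frame = CGRectMake(0, CGRectGetMaxY(self.headerLabel.frame), w, imageH);
        // push the text down so it starts where the image ends
        CGRect textFrame = self.textLabel.frame;
        textFrame.origin.y = CGRectGetMaxY(self.imageView.frame);
        self.textLabel.frame = textFrame;
        self.scrollView.contentSize = CGSizeMake(w, CGRectGetMaxY(self.textLabel.frame));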

    Read the article

  • Find the xmlHttpRequest resource in aspx page

    - by DJStroky
    I'm trying to find an XMLHttpRequest or similar resource that I can query directly to obtain an XML file for my own purposes. At this site it is possible to browse a Google Map mashup with markers. Unfortunately it is only possible to view all markers at a small view range, whereas I simply want to obtain all the information at once for the entire state. Using the Firebug console in Firefox I can enter the following JavaScript line to obtain the XML that I desire:

        document.getElementById("stationXML").value

    I tracked this document element down to an <input> tag, but I can't figure out how that input is set. Thanks in advance for your help!

    Read the article

  • RewriteCond and Full QUERY_STRING

    - by Tim
    I'm having a hard time getting my head wrapped around this one, and it should be trivial. I would like to redirect one URL with a specific query string to another URL. I want to send any request that contains the query string in the URL:

        http://mysite.com/index.php?option=com_user&view=register

    To:

        http://mysite.com/index.php?option=com_regme&view=form&regme=4&random=0&Itemid=6

    If they add anything to the end of the first URL, it should still go to the second URL so that they cannot bypass the redirection. Nothing in the first query string needs to be preserved and passed to the second; all I want to do is change the URL completely. I'm tearing my hair out trying to get this to work, yet it should be trivial. Suggestions? Thanks, -Tim
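    A minimal, untested sketch of the usual pattern: RewriteRule never sees the query string, so each parameter is matched with a RewriteCond against %{QUERY_STRING}; because the substitution carries its own query string, the original one is discarded automatically:

        RewriteEngine on
        RewriteCond %{QUERY_STRING} (^|&)option=com_user(&|$) [NC]
        RewriteCond %{QUERY_STRING} (^|&)view=register(&|$) [NC]
        RewriteRule ^index\.php$ /index.php?option=com_regme&view=form&regme=4&random=0&Itemid=6 [R=301,L]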

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one.

    Tiling the extent

    We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three-dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example:

        extent<1> e(20);   // 20 units in a single dimension with indices from 0-19
        grid<1> g(e);      // same as extent
        tiled_grid<4> tg = g.tile<4>();

    …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h).

    Let's move on swiftly to another example, in pictures, this time 2-dimensional. We start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different numbers of tiles. The numbers you pick must be such that the original total number of threads (in our example 48) remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid.

    Tiled parallel_for_each and tiled_index

    It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different from an index? A tiled_index object can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things:

        array_view<int, 2> data(8, 6, p_my_data);
        parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d)
        {
            /* todo */
        });

    Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variable exposes, when the lambda is executed by thread T (highlighted in the picture on the right)? If you can't work it out yourself, the solution follows:

        t_idx.global      = index<2> (6,3)
        t_idx.local       = index<2> (0,1)
        t_idx.tile_origin = index<2> (6,2)
        t_idx.tile        = index<2> (3,1)

    Don't move on until you are comfortable with this… the picture really helps, so use it.
    Tiled Matrix Multiplication Example – part 1

    Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?):

        01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M, N, vC);
        07:   parallel_for_each(c.grid,
        08:     [=](index<2> idx) restrict(direct3d) {
        09:
        10:       int row = idx[0]; int col = idx[1];
        11:       float sum = 0.0f;
        12:       for(int i = 0; i < W; i++)
        13:         sum += a(row, i) * b(i, col);
        14:       c[idx] = sum;
        15:   });
        16: }

    To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile-time constant):

        03: static const int TS = 16;

    ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows:

        07: parallel_for_each(c.grid.tile<TS,TS>(),

    ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08:

        08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) {

    ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows:

        09: index<2> idx = t_idx.global;

    ...and now this code just works and it is tiled!

    Closing thoughts on part 1

    The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking – I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"

    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table – accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number – and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box – which, by the way, doesn't have to match the specs of your production server – will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further – you can read more about this enhancement here.

    Back to the "How often?" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time – you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter – or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
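    A minimal T-SQL sketch of the offload pattern described above (the database name is hypothetical; the trace flags apply to SQL Server 2008 R2 SP1 CU4 and later):

        -- on the restore/offload server: full logical + physical checks
        DBCC CHECKDB (MyDB) WITH NO_INFOMSGS;

        -- on the production server: cheaper physical-only pass,
        -- sped up by trace flags 2562 and 2549
        DBCC TRACEON (2562, -1);
        DBCC TRACEON (2549, -1);
        DBCC CHECKDB (MyDB) WITH PHYSICAL_ONLY;

        -- frequent log backups keep the RPO small
        BACKUP LOG MyDB TO DISK = N'R:\Backups\MyDB_log.trn';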

    Read the article

  • How to add custom hooks to controllers in ASP.NET MVC2

    - by Adrian
    Hi, I've just started a new project in ASP.NET 4.0 with MVC 2. What I need is a custom hook at the start and end of each controller action, e.g.:

        public ActionResult Index()
        {
            // *** call to the start custom hook in externalfile.cs (is empty so does nothing)
            ViewData["welcomeMessage"] = "Hello World";
            // *** call to the end custom hook in externalfile.cs (changes "Hello World" to "Hi World")
            return View();
        }

    The view then sees welcomeMessage as "Hi World" after it is changed in the custom hook. The custom hook needs to live in an external file and not change the "core" compiled code. This causes a problem because, with my limited knowledge, ASP.NET MVC has to be compiled. Does anyone have any advice on how this can be achieved? Thanks
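    One standard mechanism worth considering (a sketch; it only meets the "no recompilation of core code" constraint if the filter lives in a separately compiled assembly) is an action filter: MVC invokes it before and after every action it is applied to, without touching the action bodies:

        using System.Web.Mvc;

        public class CustomHookAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // start hook: runs before the action body
            }

            public override void OnActionExecuted(ActionExecutedContext filterContext)
            {
                // end hook: runs after the action body; ViewData is reachable here
                filterContext.Controller.ViewData["welcomeMessage"] = "Hi World";
            }
        }

    It is applied per controller or per action with [CustomHook]; in MVC 2 another common pattern is a base controller that overrides OnActionExecuting/OnActionExecuted.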

    Read the article

  • Best solution for managing navigation (and marking currently active item) in CakePHP

    - by Nathan
    So I have been looking around for a couple of hours for a solid solution to handling site navigation in CakePHP. Over the course of a dozen projects, I have rigged together something that works for each one, but what I'm looking for is ideally a CakePHP plugin that handles the following:

    - a Navigation model
    - a component for handing off to the view
    - an element / view helper for displaying the navigation (with control over the sublevels displayed, and automatically determining the "active" item based on URL and/or controller/model/slug)
    - admin pages for managing a tree of navigation

    Any suggestions for an all-in-one solution or even the individual components would be very appreciated! Or even suggestions on how you have handled it in the past.

    Read the article

  • Null file when uploading via MVC

    - by Josh
    I have the following form in an ASP.NET MVC view:

        <form method="post" enctype="multipart/form-data" action="/Video/UploadDocument">
            <input type="file" id="document1" name="document1"/>
            <input type="submit" value="Save"/>
        </form>

    And the controller has the following method that gets called:

        public ActionResult UploadDocument(HttpPostedFileBase file)
        {
            return View();
        }

    When I break inside the UploadDocument method, the parameter 'file' is null. I've selected a valid document on my desktop and know it contains text. What am I missing to get this file upload working?
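    The usual cause (a sketch of the standard fix): the default model binder matches HttpPostedFileBase parameters by name, and "file" does not match the input's name="document1". Renaming the parameter is enough:

        public ActionResult UploadDocument(HttpPostedFileBase document1)
        {
            if (document1 != null && document1.ContentLength > 0)
            {
                // the upload arrived; save or process it here
            }
            return View();
        }

    Alternatively, Request.Files["document1"] reaches the upload regardless of the parameter name.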

    Read the article
