Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.


  • Data transfer is extremely slow after partitioning external USB drive

    - by user125912
    I bought an external USB 3.0 drive with 500 GB capacity. The OS is Windows 7. I use it on a USB 2.0 port, no problem. Initially I used it without making any partitions and it was very fast. Then I had the great idea to make partitions, one for programs, one for data and one for backup. I chose the free EASEUS Partition Master 9.1.1 and ended up with these partitions: F:Apps, primary, NTFS, 100 GB; H:Data, logical, NTFS, 250 GB; B:Backup, logical, NTFS, 150 GB. THE PROBLEM: When I copy files from C: to F: I get a transfer rate of about 100 KB/s! When I copy files from C: to H: I get a transfer rate of about 4 MB/s! That is all much too slow, slower than before. What can I do to speed it up? Thanks in advance!

    Read the article

  • type mismatch errors querying data from spreadsheet

    - by user2984933
    In Excel 2010 I am trying to query data in another spreadsheet. The data range in the source sheet/file is named DATABASE. The Date field in the database is formatted as a short date, and when I query the date without criteria I get a different format in the results: European dates (YYYY-MM-DD) with a time component. When I use criteria and a specific date in the date field of the criteria grid, using the English format MM-DD-YYYY, I get results. When I set parameters that look at destination file cells for the date, I get a type mismatch EVEN THOUGH THE CELLS ARE formatted as Short Date. This worked perfectly in my 2003 version of Excel. Now I am running Windows 7 64-bit and Office 2010 Pro. Why does the query throw a mismatch with cell references for the parameters but accept hard-coded dates in any date format? (MSQRY32.EXE)

    Read the article

  • Create text file named after a cell containing other cell data

    - by user143041
    I tried using the code below in Excel on my Mac Mini running OS X 10.7.2 and it keeps reporting an error due to the file name / path. (The Excel file I am creating is going to be a template with my formulas and macros installed, which will be used over and over.)

        Sub CreateFile()
            ' Walk down from the active cell while the cell to its right is not empty
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                MyFile = ActiveCell.Value & ".txt"      ' file is named after the active cell's value
                fnum = FreeFile()
                Open MyFile For Output As fnum
                ' write the two cells to the right of the active cell, space-separated
                Print #fnum, ActiveCell.Offset(0, 1) & " " & ActiveCell.Offset(0, 2)
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    What I'm trying to do: 1st objective: I would like the following data to be used to create a text file. A:A is what I need the name of the file to be, and B2 is the content I need in the text file. So A2 - "repair-video-game-Glassboro-NJ-08028.txt" is the file name and B2 is the content of the file. Next, A3 is the file name and B3 is the content for that file, etc. ONCE the content reads what is in cells A16 and B16 (length will vary), the file creation should stop; if not, I can delete the additional files created. This sheet will never change. Is there a way to make the Excel macro always go to this sheet instead of having to select it with the mouse to identify the starting point? 2nd objective: I would like the following data to be used to create a text file. A1 is what I need the name of the file to be, and B:B is the content I want in the file. So A2 is the file name "geo-sitemap.xml" and B:B is the content of the file (ignore the .xml file extension in the photo). ONCE the content cell reads what is in cell "B16" (length will vary), the file creation should stop; if not, I can adjust the cells that need content (the formulated content you see in the image is preset for 500 rows). This sheet will never change. Is there a way to make the Excel macro always go to this sheet instead of having to select it with the mouse to identify the starting point? I can provide the content in the cells that are filled in by Excel formulas and that are not to be included in the .txt files. It is OK if this is not possible. I can delete the extra cells that are not populated (based on the data sheet). Please let me know if you need any additional information or clarity and I will be happy to provide it.

    Read the article

  • What are my options for a disk with what seems to be a corrupted filesystem?

    - by CT
    I have a friend with an old Dell that will not boot into Windows. It has an IDE drive. It spins up. I have an IDE to USB device. I've attached the drive via that device to a working laptop. The drive does not mount. If I go into Disk Management I can see the drive but it will not initialize, saying "Drive not ready." I've also booted into a Linux live CD to see if the drive mounts, and it does not. I am just trying to recover some pictures from the drive. The data is not important enough to send to a professional. The issue is more one of curiosity about how to recover data should these situations occur again in the future.

    Read the article

  • Using Excel Lookup Function and Handling Case Where No Matches Exist

    - by Dave
    I'm using Excel to enter data for an event where people register. A high percentage of the registrants will have registered for previous events, so we already have their name and ID number. I'm trying to use the LOOKUP function in Excel to look up the name and then populate the ID field with their ID number. This works well unless the value that is looked up is a new user that we don't already have details for. However, if the LOOKUP function cannot find an exact match, it chooses the largest value in the lookup_range that is less than or equal to the value. This causes a problem since you can't tell if the match was exact (and the data is correct) or not exact and the match is incorrect. How do I catch non-matches and handle them separately?
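    One commonly suggested way to force an exact match and catch misses is to switch from LOOKUP to VLOOKUP with FALSE as the last argument and wrap it in IFERROR — a minimal sketch, assuming the known registrants sit in a two-column range named RegList with names in the first column and IDs in the second (the range name is hypothetical):

        =IFERROR(VLOOKUP(A2, RegList, 2, FALSE), "NEW USER")

    With FALSE, VLOOKUP returns an #N/A error instead of the nearest smaller value, and IFERROR converts that error into an explicit "NEW USER" marker that can be handled separately.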

    Read the article

  • Backing Up User Data when data is not in use. Should I be concerned?

    - by jberryman
    This may be a dumb question. I would like to use duplicity to make backups to Amazon S3 of directories, each of which contains a different user's data. Each directory could be written to at any time. So I have two questions: Should I be concerned that a scheduled backup of a directory might occur in the middle of data being written to files in the directory, resulting in a corrupted backup? And if that is a valid concern, how would I go about temporarily delaying an operation while I/O is happening, to try to minimize that effect? Thanks for the advice.
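    One approach sometimes used for this kind of consistency problem — purely a sketch, and it assumes the user directories live on an LVM volume, which the question does not state — is to run duplicity against a read-only snapshot so files cannot change mid-backup:

        # volume, mount point and bucket names below are placeholders
        lvcreate --snapshot --size 1G --name backup-snap /dev/vg0/userdata
        mkdir -p /mnt/backup-snap
        mount -o ro /dev/vg0/backup-snap /mnt/backup-snap
        duplicity /mnt/backup-snap/alice s3+http://my-backup-bucket/alice
        umount /mnt/backup-snap
        lvremove -f /dev/vg0/backup-snap

    The snapshot freezes a point-in-time view of the filesystem; a file may still be captured mid-write by an application, but the backup will at least be consistent with a single instant on disk.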

    Read the article

  • Can Kies provide a data connection so my phone can avoid mobile data usage?

    - by Highly Irregular
    I regularly connect my Samsung phone by USB cable to my PC to charge it, and when I run the Samsung Kies app on the PC I can sync data too. Is it possible for my phone to use my PC's internet connection through the USB link to avoid data charges? I don't have a Wi-Fi connection available. It would make sense if Kies could provide this, or perhaps there's some other way I can achieve it without Wi-Fi, such as using Bluetooth. I have a Samsung GT-S7500, though this question should apply to many other models.

    Read the article

  • Entire folders deleted from My Documents periodically with remote logins

    - by darron
    I've got a customer who thinks our application is constantly deleting all its data. It's really becoming a major problem for them. The problem is, there's no way it's us. They are losing not only our entire data folder (which we locate in the user's "My Documents" folder to make it easy to find), but also some local settings files which are in entirely different places within the general user profile. It REALLY looks like the entire user profile is either getting reset, or is somehow synchronizing with a more... blank profile somewhere else. They're running this on some kind of virtualized Citrix guest OS. I see references to a "group policy folder redirection" that could do this... maybe roaming profiles? Any ideas? Help!

    Read the article

  • RAID 0 disk failure, how to recover the RAID?

    - by user7985
    The situation is this: a PC with 2 hard disks in a RAID 0 array. The electronics on one of the disks have failed. I cannot find the same board for the disk (I've tried this: with the board removed from the OK disk and put on the second, damaged one, it works fine). I've made an image with "dd" on Linux onto a new hard drive (same size, not the same model) and now I get "Offline member" in the RAID config screen. Will I be able to recover the data stored on the drives? Any help or experience with this kind of problem is appreciated. And surely, I know it was stupid to put the disks in RAID 0 and store data on them :(

    Read the article

  • Recover snap server data

    - by Ugg
    Hi, I have a Snap Server 110. The machine powers on OK and the health check passes, but I am unable to connect: there is no response on the assigned IP and no way to reach the device via the Snap Server Manager. I believe the device is powering on but not loading the OS. I tried pulling the disk and hooking it up to a Windows PC via USB; using DiskInternals Linux Reader I am unable to access two of the partitions (one of which is the large data partition). There are three partitions on the drive and only one is accessible via Linux Reader. I am looking to recover the data off the drive; can anyone suggest a DIY option, please?

    Read the article

  • Slow data transfer using SSH

    - by Floste
    The server is an Ubuntu Server 11.04 machine with sshd. SSH works fine for console programs, but data transfer is slow, which is very annoying when transferring large files. I tried two different client programs and changed the port, but the speed is always the same. I know the server can transfer data a lot faster over SSL, which as far as I know uses AES. I configured my SSH client to use AES too, but to no effect. Why is SSH several times slower than SSL, and is there a way to improve the transfer speed of SSH?
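    One thing sometimes worth checking — an assumption, not a diagnosis — is whether the negotiated cipher or compression is the bottleneck, by forcing them explicitly and comparing throughput, for example with scp:

        # host name and test file are placeholders
        scp -c aes128-ctr -o Compression=no bigfile.bin user@server:/tmp/
        scp -c arcfour -o Compression=no bigfile.bin user@server:/tmp/

    If one cipher is dramatically faster, the Ciphers option in ssh_config/sshd_config can be adjusted accordingly; if all of them are equally slow, the problem is more likely elsewhere (TCP window sizes, disk, or network).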

    Read the article

  • How is WPF Data Binding using Object Data Source in Visual Studio 2010 done?

    - by Rob Perkins
    This is probably mostly a question about how to use the VS 2010 IDE tools in a way the Microsofties didn't specifically intend. But since this is something I immediately tried without success. I have defined a .NET 4.0 WPF Application project with a simple class that looks like this: Public Class Class1 Public Property One As String = "OneString" Public Property Two As String = "TwoString" End Class I then defined it as an "Object Data Source" in VS2010, using the IDE's "Add New Data Source..." feature. This exposes the class members in a GUI element in the IDE as given in this image: Dragging "Class1" from that tool to the surface of "Window1.xaml" in a default "WPF Application" results in the design view looking like this: And generated XAML like this: <Window x:Class="Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="133" Width="170" xmlns:my="clr-namespace:WpfApplication1" mc:Ignorable="d" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" > <Window.Resources> <CollectionViewSource x:Key="Class1ViewSource" d:DesignSource="{d:DesignInstance my:Class1, CreateList=True}" /> </Window.Resources> <Grid DataContext="{StaticResource Class1ViewSource}" HorizontalAlignment="Left" Name="Grid1" VerticalAlignment="Top"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Label Content="One:" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="0" Height="23" HorizontalAlignment="Left" Margin="3" Name="OneTextBlock" Text="{Binding Path=One}" VerticalAlignment="Center" /> <Label Content="Two:" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="3" Name="TwoTextBlock" Text="{Binding Path=Two}" VerticalAlignment="Center" /> </Grid> Note the data bindings Text="{Binding Path=One}" and Text="{Binding Path=Two}" in the TextBlock elements. Code-behind for Window1.xaml has this in Window_Loaded: Class Window1 Private m_c1 As New Class1 Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource) 'Load data by setting the CollectionViewSource.Source property: 'Class1ViewSource.Source = [generic data source] Me.DataContext = m_c1 End Sub End Class Running the application produces this output: The expected result was that "OneString" would appear next to "One" and "TwoString" next to "Two" in the running window. The question is: Why didn't this work? What will work instead? If I put bindings in a DataTemplate, it works. Blend, with its sample data stuff, implied that this should work, but it doesn't. I know I'm missing something pretty fundamental here; what is it?
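    For what it's worth, one explanation often suggested for this exact symptom: the Grid's DataContext is the Class1ViewSource resource, not the window's DataContext, so the TextBlock bindings resolve against a CollectionViewSource whose Source was never set. A hedged sketch of that idea (the list wrapper is an assumption, not code from the question):

        Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded
            Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = _
                CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource)
            ' Give the view source something to show, instead of (or as well as) setting Me.DataContext.
            Class1ViewSource.Source = New List(Of Class1) From {m_c1}
        End Sub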

    Read the article

  • JSF 2.0: java based custom component + html table + facelets = data model not updated

    - by mikic
    Hi, I'm having problems getting the data model of a HtmlDataTable to be correctly updated by JSF 2.0 and Facelets. I have created a custom Java-based component that extends HtmlDataTable and dynamically adds columns in the encodeBegin method. @Override public void encodeBegin(FacesContext context) throws IOException { if (this.findComponent("c0") == null) { for (int i = 0; i < 3; i++) { HtmlColumn myNewCol = new HtmlColumn(); myNewCol.setId("c" + i); HtmlInputText myNewText = new HtmlInputText(); myNewText.setId("t" + i); myNewText.setValue("#{row[" + i + "]}"); myNewCol.getChildren().add(myNewText); this.getChildren().add(myNewCol); } } super.encodeBegin(context); } My test page contains the following <h:form id="fromtb"> <test:MatrixTest id="tb" var="row" value="#{MyManagedBean.model}"> </test:MatrixTest> <h:commandButton id="btn" value="Set" action="#{MyManagedBean.mergeInput}"/> </h:form> <h:outputText id="mergedInput" value="#{MyManagedBean.mergedInput}"/> My managed bean class contains the following @ManagedBean(name="MyManagedBean") @SessionScoped public class MyManagedBean { private List model = null; private String mergedInput = null; public MyManagedBean() { model = new ArrayList(); List myFirst = new ArrayList(); myFirst.add(""); myFirst.add(""); myFirst.add(""); model.add(myFirst); List mySecond = new ArrayList(); mySecond.add(""); mySecond.add(""); mySecond.add(""); model.add(mySecond); } public String mergeInput() { StringBuffer myMergedInput = new StringBuffer(); for (Object object : model) { myMergedInput.append(object); } setMergedInput(myMergedInput.toString()); return null; } public List getModel() { return model; } public void setModel(List model) { this.model = model; } public String getMergedInput() { return mergedInput; } public void setMergedInput(String mergedInput) { this.mergedInput = mergedInput; } When invoked, the page is correctly rendered with a table made of 3 columns (added at runtime) and 2 rows (as my data model has 2 rows). However when the user enter some data in the input fields and then click the submit button, the model is not correctly updated and therefore the mergeInput() method creates a sequence of empty strings which is rendered on the same page. I have added some logging to the decode() method of my custom component and I can see that the parameters entered by the user are being posted back with the request, however these parameters are not used to update the data model. If I update the encodeBegin() method of my custom component as follow @Override public void encodeBegin(FacesContext context) throws IOException { super.encodeBegin(context); } and I update the test page as follow <test:MatrixTest id="tb" var="row" value="#{MyManagedBean.model}"> <h:column id="c0"><h:inputText id="t0" value="#{row[0]}"/></h:column> <h:column id="c1"><h:inputText id="t1" value="#{row[1]}"/></h:column> <h:column id="c2"><h:inputText id="t2" value="#{row[2]}"/></h:column> </test:MatrixTest> the page is correctly rendered and this time when the user enters data and submits the form, the underlying data model is correctly updated and the mergeInput() method creates a sequence of strings with the user data. Why does the test case with columns declared in the facelet page works correctly (ie the data model is correctly updated by JSF) where the same does not happen when the columns are created at runtime using the encodeBegin() method? Is there any method I need to invoke or interface I need to extend in order to ensure the data model is correctly updated? 
I am using this test case to address an issue that is appearing in a much more complex component, so I can't achieve the same functionality using a Facelets composite component. Please note that this has been done using NetBeans 6.8, JRE 1.6.0u18, GlassFish 3.0. Thanks for your help.
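    For reference, when input components are created in code like this, the value is usually wired up as a real ValueExpression rather than by passing the EL string to setValue() — whether that resolves this particular case is an assumption, but the API calls themselves are standard JSF:

        // inside encodeBegin(), in place of myNewText.setValue("#{row[" + i + "]}")
        ExpressionFactory ef = context.getApplication().getExpressionFactory();
        ValueExpression ve = ef.createValueExpression(
                context.getELContext(), "#{row[" + i + "]}", Object.class);
        myNewText.setValueExpression("value", ve);

    setValue() stores a literal local value, while setValueExpression() gives JSF a binding it can both render from and write back to during Update Model Values.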

    Read the article

  • Scope quandary with namespaces, function templates, and static data

    - by Adrian McCarthy
    This scoping problem seems like the type of C++ quandary that Scott Meyers would have addressed in one of his Effective C++ books. I have a function, Analyze, that does some analysis on a range of data. The function is called from a few places with different types of iterators, so I have made it a template (and thus implemented it in a header file). The function depends on a static table of data, AnalysisTable, that I don't want to expose to the rest of the code. My first approach was to make the table a static const inside Analysis. namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { static const int AnalysisTable[] = { /* data */ }; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace It appears that the compiler creates a copy of AnalysisTable for each instantiation of Analyze, which is wasteful of space (and, to a small degree, time). So I moved the table outside the function like this: namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace There's only one copy of the table now, but it's exposed to the rest of the code. I'd rather keep this implementation detail hidden, so I introduced an unnamed namespace: namespace MyNamespace { namespace { // unnamed to hide AnalysisTable const int AnalysisTable[] = { /* data */ }; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace But now I again have multiple copies of the table, because each compilation unit that includes this header file gets its own. If Analyze weren't a template, I could move all the implementation detail out of the header file. But it is a template, so I seem stuck. My next attempt was to put the table in the implementation file and to make an extern declaration within Analyze. // foo.h ------ namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { extern const int AnalysisTable[]; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ------ #include "foo.h" namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; } This looks like it should work, and--indeed--the compiler is satisfied. The linker, however, complains, "unresolved external symbol AnalysisTable." Drat! (Can someone explain what I'm missing here?) The only thing I could think of was to give the inner namespace a name, declare the table in the header, and provide the actual data in an implementation file: // foo.h ----- namespace MyNamespace { namespace PrivateStuff { extern const int AnalysisTable[]; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses PrivateStuff::AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ----- #include "foo.h" namespace MyNamespace { namespace PrivateStuff { const int AnalysisTable[] = { /* data */ }; } } Once again, I have exactly one instance of AnalysisTable (yay!), but other parts of the program can access it (boo!). The inner namespace makes it a little clearer that they shouldn't, but it's still possible. 
Is it possible to have one instance of the table and to move the table beyond the reach of everything but Analyze?
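    For what it's worth, the linker error in the extern-declaration attempt has a likely explanation (stated here as an assumption about the code shown): at namespace scope a const object has internal linkage by default in C++, so the definition in foo.cpp never produces an external symbol for the block-scope extern declaration to bind to. Marking the definition itself extern is the usual fix — a minimal sketch:

        // foo.cpp
        #include "foo.h"
        namespace MyNamespace {
            // 'extern' on the definition gives the const array external linkage,
            // so the "extern const int AnalysisTable[];" inside Analyze can find it.
            extern const int AnalysisTable[] = { 1, 2, 3 };  // placeholder values
        }

    That keeps a single instance of the table while leaving nothing but the local declaration visible inside Analyze.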

    Read the article

  • CoreData: Same predicate (IN) returns different fetched results after a Save operation

    - by Jason Lee
    I have code below: NSArray *existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId]; [context save:&error]; existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId]; NSArray *allTasks = [[TaskBizDB sharedInstance] fetchTasksOfProject:projectId]; First line returns two objects; Second line save the context; Third line returns just one object, which is contained in the 'two objects' above; And the last line returns 6 objects, containing the 'two objects' returned at the first line. The fetch interface works like below: WXModel *model = [WXModel modelWithEntity:NSStringFromClass([WQPKTeamTask class])]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(%@ IN personWatchers) AND (projectId == %d)", currentLoginUser, projectId]; [model setPredicate:predicate]; NSArray *fetchedTasks = [model fetch]; if (fetchedTasks.count == 0) return nil; return fetchedTasks; What confused me is that, with the same fetch request, why return different results just after a save? Here comes more detail: The 'two objects' returned at the first line are: <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: { projectId = 372004; taskId = 338001; personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" ); } <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: { projectId = 372004; taskId = 340006; personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" ); } And the only one object returned at third line is: <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: { projectId = 372004; taskId = 338001; personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" ); } Printing description of allTasks: <_PFArray 0xf30b9a0>( <WQPKTeamTask: 0xf3ab9d0> (entity: WQPKTeamTask; id: 0xf3cda40 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p6> ; data: <fault>), <WQPKTeamTask: 0xf315720> (entity: WQPKTeamTask; id: 0xf3c23a0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p7> ; data: <fault>), <WQPKTeamTask: 0xf3a1ed0> (entity: WQPKTeamTask; id: 0xf3cda30 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p8> ; data: <fault>), <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: { projectId = 372004; taskId = 338001; personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" ); }), <WQPKTeamTask: 0xf325e50> (entity: WQPKTeamTask; id: 0xf343820 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p10> ; data: <fault>), <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: { projectId = 372004; taskId = 340006; personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" ); }) ) UPDATE 1 If I call the same interface fetchTasksWatchedByMeOfProject: in: #pragma mark - NSFetchedResultsController Delegate - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { I will get 'two objects' as well. 
UPDATE 2 I've tried: NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers == %@) AND (projectId == %d)", currentLoginUser, projectId]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers.personId == %@) AND (projectId == %d)", currentLoginUserId, projectId]; Still the same result. UPDATE 3 I've checked the save:&error, error is nil.

    Read the article

  • Losing data after reading it correctly from a file

    - by user1388172
    I have the following class and a data structure (an ADT, abstract data type) which I use together in main. The ADT is a linked list. I read the input data from a file and create an object which looks just fine when printed, but after I push_back() it, the third int member gets reset to 0. So, example and code. Example: ex.in: 1 7 31 2 2 2 3 3 3 Now I create objects from each line, which print as they are supposed to, but after push_back(): 1 7 0 2 2 0 3 3 0 Class.h: class RAngle { private: int x,y,l,b; public: int solution,prec; RAngle(){ x = y = solution = prec = b = l =0; } RAngle(int i,int j,int k){ x = i; y = j; l = k; solution = 0; prec=0; b=0; } friend ostream& operator << (ostream& out, const RAngle& ra){ out << ra.x << " " << ra.y << " " << ra.l <<endl; return out; } friend istream& operator >>( istream& is, RAngle& ra){ is >> ra.x; is >> ra.y; is >> ra.l; return is ; } }; ADT.h: template <class T> class List { private: struct Elem { T data; Elem* next; }; Elem* first; T pop_front(){ if (first!=NULL) { T aux = first->data; first = first->next; return aux; } T a; return a; } void push_back(T data){ Elem *n = new Elem; n->data = data; n->next = NULL; if (first == NULL) { first = n; return ; } Elem *current; for(current=first;current->next != NULL;current=current->next); current->next = n; } }; Main.cpp (after I call this function in main, which prints the objects as they are supposed to be, the x variable (from the RAngle class) changes to 0 in all cases): void readData(List <RAngle> &l){ RAngle r; ifstream f_in; f_in.open("ex.in",ios::in); for(int i=0;i<10;++i){ f_in >> r; cout << r; l.push_back(r); } }

    Read the article

  • Will using the title attribute on a div help for SEO purposes? [duplicate]

    - by Niko Jojo
    This question is an exact duplicate of: Should I set the title attribute for content DIV's to explain what they contain? Nowadays many images are displayed using CSS, like below: <div title="My Logo" class="all_logo mt15">&nbsp;</div> The div above shows a logo image, but because CSS is used for the logo instead of an <img> tag, it does not get the SEO benefit of the alt attribute. My question is: will the title attribute of a <div> help with SEO?
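    For comparison — a sketch only, not a claim about how crawlers weigh the title attribute — the alt-text benefit mentioned in the question exists only when the logo is an actual <img> element:

        <div class="all_logo mt15">
          <img src="logo.png" alt="My Logo">  <!-- logo.png is a placeholder path -->
        </div>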

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Should I index my mobile duplicate of my desktop website on Google?

    - by Roy
    I have a duplicate of http://he.thenamestork.com with the URL http://he.thenamestork.com/mobile - all files are duplicated, while the mobile version has slightly different content. Notice that when I write 'mobile' I only mean regular HTML4 with smartphone-friendly CSS. I have a series of redirects (using .htaccess) that allow smartphone users to land directly on the mobile versions. But I wonder, should I index the mobile version as well so those users will be able to get direct, faster links? And what is the proper way of doing that without causing problems in Google search? I guess I'm asking if there's a way to get Google to display regular URLs for desktop users and ../mobile/.. URLs for smartphone users, and if that is smart SEO-wise.
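    For context, the pattern Google documents for separate mobile URLs pairs a rel="alternate" annotation on the desktop page with a rel="canonical" on the mobile page — roughly like this (the media query value is taken from Google's published example; the URLs are the ones from the question):

        <!-- on the desktop page, e.g. http://he.thenamestork.com/ -->
        <link rel="alternate" media="only screen and (max-width: 640px)" href="http://he.thenamestork.com/mobile" />
        <!-- on the mobile page, http://he.thenamestork.com/mobile -->
        <link rel="canonical" href="http://he.thenamestork.com/" />

    That lets both versions be indexed while telling Google they are the same content, so the appropriate URL can be shown to desktop and smartphone searchers.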

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website, so that I can showcase my work to new clients. What is the best way to not get my clients original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
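    The robots.txt part of that plan would look roughly like this on the showcase subdomain (a sketch of what the question describes, not an endorsement of the approach):

        User-agent: *
        Disallow: /

    One caveat worth weighing: a crawler blocked by robots.txt never fetches the page, so it also never sees the rel="canonical" link in the <head>; the two measures overlap rather than reinforce each other.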

    Read the article

  • How to configure Google sitemap links in Wordpress? (without editing its HTML or PHP source code) [duplicate]

    - by Alexander Farber
    This question already has an answer here: What are the most important things I need to do to encourage Google Sitelinks? I run a Wordpress 3.7.1–de_DE site, but don't have much experience with it yet. When my site comes up in a Google search, there are 2 links displayed underneath. I believe these links are called "Google sitemap" links, and my question is how to configure them in Wordpress, because while the right link points to the /ueber-mich URL of the website, the left link was pointing to a non-existing /imprint and I had to add that webpage as a workaround for now. And I'd like to change /imprint to the German /impressum anyway (currently I use mod_rewrite to redirect). UPDATE: Dear downvoters and movers, would you mind READING my question, please? My question is about how to configure Google sitemap links in Wordpress. So it is NOT A DUPLICATE (I do not want to edit the HTML code, I want to find the correct configuration in Wordpress) and my question SHOULDN'T HAVE BEEN MOVED AWAY from wordpress.stackexchange.com.

    Read the article

  • Is the content on the front page considered a duplicate of the post?

    - by yibe
    I asked this same question on Stack Overflow, but it was closed as being off topic. Therefore, I am posting it here. In Wordpress blogs, the front page will display many posts, in whole or as excerpts. When the link to a post is clicked, the content is opened with another template file (single.php). Can we say that the content displayed on the front page and on the post pages is considered duplicate? Does it harm SEO in any way?
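    A common mitigation — assuming a standard theme whose front-page loop calls the_content(), which is an assumption about this particular blog — is to show only excerpts on the front page so the overlap with single.php is small:

        <?php
        // in the theme's front-page loop (index.php or home.php),
        // in place of the_content():
        if (have_posts()) {
            while (have_posts()) {
                the_post();
                the_title('<h2>', '</h2>');
                the_excerpt();   // short summary instead of the full post text
            }
        }
        ?>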

    Read the article

  • Do you sign each of your source files with your name? [duplicate]

    - by regularfry
    Possible Duplicate: How do you keep track of the authors of code? One of my colleagues is in the habit of putting his name and email address in the head of each source file he works on, as author metadata. I am not; I prefer to rely on source control to tell me who I should be speaking to about a given set of functionality. Should I also be signing files I work on for any other reasons? Do you? If so, why? To be clear, this is in addition to whatever metadata for copyright and licensing information is included, and applies to both open sourced and proprietary code.

    Read the article

  • CoreData update problems

    - by kpower
    My app makes updates in background thread then saves context changes. And in main context there is a table view that works with NSFetchedResultsController. For some time updates work correctly, but then exception is thrown. To check this I've added NSLog(@"%@", [self.controller fetchedObjects]); to -controllerDidChangeContent:. Here is what I got: "<PRBattle: 0x6d30530> (entity: PRBattle; id: 0x6d319d0 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p2> ; data: {\n battleId = \"-1\";\n finishedAt = \"2012-11-06 11:37:36 +0000\";\n opponent = \"0x6d2f730 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p1>\";\n opponentScore = nil;\n score = nil;\n status = 4;\n})", "<PRBattle: 0x6d306f0> (entity: PRBattle; id: 0x6d319f0 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p1> ; data: {\n battleId = \"-1\";\n finishedAt = \"2012-11-06 11:37:36 +0000\";\n opponent = \"0x6d2ddb0 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p3>\";\n opponentScore = nil;\n score = nil;\n status = 4;\n})", "<PRBattle: 0x6d30830> (entity: PRBattle; id: 0x6d31650 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p11> ; data: <fault>)", "<PRBattle: 0x6d306b0> (entity: PRBattle; id: 0x6d319e0 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p5> ; data: {\n battleId = 325;\n finishedAt = nil;\n opponent = \"0x6d2f730 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p1>\";\n opponentScore = 91;\n score = 59;\n status = 3;\n})", "<PRBattle: 0x6d30730> (entity: PRBattle; id: 0x6d31a00 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p6> ; data: {\n battleId = 323;\n finishedAt = nil;\n opponent = \"0x6d2ddb0 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p3>\";\n opponentScore = 0;\n score = 0;\n status = 3;\n})", "<PRBattle: 0x6d307b0> (entity: PRBattle; id: 0x6d31630 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p9> ; data: {\n battleId = 370;\n finishedAt = \"2012-11-06 14:24:14 +0000\";\n opponent = \"0x79a8e90 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p2>\";\n opponentScore = 180;\n score = 180;\n status = 4;\n})", "<PRBattle: 0x6d307f0> (entity: PRBattle; id: 0x6d31640 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p10> ; data: {\n battleId = 309;\n finishedAt = \"2012-11-02 01:19:27 +0000\";\n opponent = \"0x79a8e90 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p2>\";\n opponentScore = 120;\n score = 240;\n status = 4;\n})", "<PRBattle: 0x6d30770> (entity: PRBattle; id: 0x6d31620 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p7> ; data: {\n battleId = 315;\n finishedAt = \"2012-11-02 02:26:24 +0000\";\n opponent = \"0x79a8e90 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PROpponent/p2>\";\n opponentScore = 119;\n score = 179;\n status = 4;\n})" ) Faulted object (0xe972610) here causes crash. I've logged data during update & before saving. This object is in updatedObjects only. Why can this method return "bad" object? (Moreover, during updates this object is affected almost each update. And only after some passes becomes "bad" one). P.S.: I use RestKit to manage CoreData. UPDATED: The exception was got, when I did smth. like this: for (PRBattle *battle in [self.controller fetchedObjects) { switch (battle.statusScalar) { case ... default: [battle willAccessValueForKey:nil]; NSAssert1(NO, @"Unexpected battle status found: %@", battle); } } The exception is on line with -willAccessValueForKey:. 
Scalar status for battle is enum, that is bind to integer values 1..4. I've mentioned all possible values in switch's cases (above default:). And the last one has break;. So this one is possible only when battle.statusScalar returns non-enum value. Status scalar implementation in PRBattle: - (PRBattleStatuses)statusScalar { [self willAccessValueForKey:@"statusScalar"]; PRBattleStatuses result = (PRBattleStatuses)[self.status integerValue]; [self didAccessValueForKey:@"statusScalar"]; return result; } And battle.status has validation rules: - min-value: 1 - max-value: 4 - default: no value And the last thing - debug log: objc[4664]: EXCEPTIONS: throwing 0x7d33f80 (object 0xe67d2a0, a _NSCoreDataException) objc[4664]: EXCEPTIONS: searching through frame [ip=0x97b401 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: catch(id) objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x97b401 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: handling exception 0x7d33f60 at 0x97b79f objc[4664]: EXCEPTIONS: rethrowing current exception objc[4664]: EXCEPTIONS: searching through frame [ip=0x97b911 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x9ac8b7 sp=0xbfffdc20] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x97ee80 sp=0xbfffdc40] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x361d0 sp=0xbfffdc70] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0xa701d8 sp=0xbfffde10] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: catch(id) objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x97b911 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: finishing handler objc[4664]: EXCEPTIONS: searching through frame [ip=0x97b963 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x9ac8b7 sp=0xbfffdc20] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x97ee80 sp=0xbfffdc40] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0x361d0 sp=0xbfffdc70] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: searching through frame [ip=0xa701d8 sp=0xbfffde10] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: catch(id) objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x97b963 sp=0xbfffd9b0] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x9ac8b7 sp=0xbfffdc20] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x97ee80 sp=0xbfffdc40] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x361d0 sp=0xbfffdc70] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: unwinding through frame [ip=0x3656f sp=0xbfffdc70] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: unwinding through frame [ip=0xa701d8 sp=0xbfffde10] for exception 0x7d33f60 objc[4664]: EXCEPTIONS: handling exception 0x7d33f60 at 0xa701f5 2012-11-07 13:37:55.463 TestApp[4664:fb03] CoreData: error: Serious application error. An exception was caught from the delegate of NSFetchedResultsController during a call to -controllerDidChangeContent:. CoreData could not fulfill a fault for '0x6d31650 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p10>' with userInfo { NSAffectedObjectsErrorKey = ( "<PRBattle: 0x6d30830> (entity: PRBattle; id: 0x6d31650 <x-coredata://882BD521-90CD-4682-B19A-000A4976E471/PRBattle/p10> ; data: <fault>)" ); }
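    A small diagnostic sometimes used for "could not fulfill a fault" situations — this is only a sketch for narrowing the problem down, not a fix, and it reuses the context and loop variable from the question — is to ask the context whether the faulted object's row still exists before touching its attributes:

        NSError *faultError = nil;
        NSManagedObject *existing = [battle.managedObjectContext existingObjectWithID:battle.objectID
                                                                                error:&faultError];
        if (existing == nil) {
            // The row backing this PRBattle is gone (e.g. removed by the background save),
            // which would explain the exception when the fault is fired.
            NSLog(@"Battle %@ no longer exists in the store: %@", battle.objectID, faultError);
        }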

    Read the article
