Search Results

Search found 498 results on 20 pages for 'quicker'.

Page 3 of 20

  • HP's Linux OS Alternative Gets a Face Lift

    ServerWatch: "In contrast, IBM and HP, the other two big enterprise UNIX players, have been plodding along steadily, hoping all the while to pick up disaffected Sun customers quicker than they lose their own to Linux implementations."

    Read the article

  • How to convert a .pdf file into a folder of images?

    - by Shawn
    I have some .pdf files that I would like to convert to my preferred reading format of .cbr or .cbz or, if this isn't directly possible, I need to extract all pages from the .pdf as images and then compress them into my format of choice. I have only been able to save pages one at a time with Document Viewer. Obviously, I'd like to do it a little quicker. I have tried pdfsam, pdf shuffler, and pdfmod all with no luck. I am using Ubuntu 11.10.
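    One possible route (not mentioned in the question) is to render each page with pdftoppm from poppler-utils and then pack the images into a .cbz, which is just a ZIP archive of images. A minimal Python sketch along those lines; the file names are placeholders and poppler-utils is assumed to be installed:

        import subprocess
        import zipfile
        from pathlib import Path

        def pdf_to_cbz(pdf_path, out_dir="pages"):
            # Render every page of the PDF as a PNG using poppler's pdftoppm
            # (it writes files named <prefix>-01.png, <prefix>-02.png, ...).
            Path(out_dir).mkdir(exist_ok=True)
            subprocess.run(["pdftoppm", "-png", pdf_path, f"{out_dir}/page"], check=True)

            # A .cbz file is simply a ZIP archive of images, so pack the pages up.
            cbz_path = Path(pdf_path).with_suffix(".cbz")
            with zipfile.ZipFile(cbz_path, "w") as cbz:
                for png in sorted(Path(out_dir).glob("page-*.png")):
                    cbz.write(png, arcname=png.name)
            return cbz_path

        pdf_to_cbz("comic.pdf")  # hypothetical input file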

    Read the article

  • RegEx-Based Finding and Replacing of Text in SSMS

    So often, one sees developers doing repetitive coding in SQL Server Management Studio or Visual Studio that could be made much quicker and easier by using the Regular-Expression-based Find/Replace functionality. It is understandable, since the syntax is odd and some features are missing, but it is still worth knowing about.

    Read the article

  • The Benefits of Website Creation Software

    Building your own website can be time-consuming and costly if you are not technically inclined. Building one or more websites with site-building software, however, is an easy, quick, and efficient way to do it, and can lead to quicker profits.

    Read the article

  • Web app to take screen shot of website and annotate?

    - by Anagio
    Does anyone know of a website that will take a full screenshot of another website and let users write notes over it, then send that annotated image by email or private link? Basically, I'm looking for a quicker way to write notes on a site than taking a screenshot myself and putting it into Photoshop. Just an update: I don't need any browser extensions, which I already have and which do the same thing. I'm looking for a web app that does this so I can give it to a client.

    Read the article

  • Three Things For Good SEO in Article Writing to Earn Money Online

    Article marketing is becoming more popular every day. It is a tactic used by many professionals because it can earn you a high page rank more quickly and bring in high traffic within a short time period. If you are looking to use this strategy, you need to understand that great content alone will not get you there; you also need a solid optimization strategy that will push you higher in searches. These are the steps to move quickly!

    Read the article

  • Is there a quick way to enter uppercase accented characters?

    - by agnul
    I know I can enter any character I need using the character-map applet, but that means grabbing the mouse/opening the "run" dialog, switching to another window, selecting the character, copying... you get the idea. Is there a quicker way to enter characters that are not available on the keyboard (and are not available as alt-gr combinations)? In my specific case I'm using a laptop with the Italian keyboard layout, but I suspect a general solution exists.

    Read the article

  • SSIS Field Notes – SQLBits 7 Presentation

    Here are the slides from my session SSIS Field Notes, presented at SQLBits 7 in York earlier this month - SSIS Field Notes – Darren Green.pptx. On a similar theme, the video of my session Design Patterns for SSIS Performance is now available. You heard it here first! I know this because I’ve only just finished updating the SQLBits site with all the videos from SQLBits 6. Hopefully we’ll get them released quicker for SQLBits 7.

    Read the article

  • Does WordPress have any SEO benefits over sites coded from scratch? [closed]

    - by Glycan
    Possible Duplicate: Do wordpress websites get indexed quicker by SE than a regular website? I have heard SEO experts suggest moving sites to WordPress for better SEO. Googling offers some perspectives that confirm this (like this, or, somewhat more dubiously, this). Is this accurate? Some other places say that it doesn't matter, except that WordPress configures all the meta correctly. In that case, what exactly must be configured?

    Read the article

  • Best Website Development Plan

    With increased competition today, a top-class website development plan is no longer the developer's task alone. It is in fact the website owner's competitive spirit that defines the business prospects on the internet. Site owners are becoming more demanding in their needs. Studies have shown that a discerning website owner, one who emphasizes being unique and standing out from the rest, is most often the one to see quicker, surer success.

    Read the article

  • How to improve UI development skills (for a Java developer)?

    - by bluetech
    I have worked on backend development, mostly with Java. For the past 6 months I have been working on UI a lot and I want to improve my skills. I know HTML, CSS and JavaScript (also jQuery and YUI), but I have never mastered them well enough to develop efficient and maintainable solutions much more quickly than I do now. Can other UI developers give me any tips/resources? I would also like to learn about patterns and best practices for UI development.

    Read the article

  • CNC Information - Data Storage and Transfer

    A CNC machine is worth trying when there is a need to improve speed and accuracy. The machine performs better at repetitive tasks and gets large jobs done more quickly. Woodworking shops or industria... [Author: Scheygen Smith - Computers and Internet - March 21, 2010]

    Read the article

  • 5 Strategic Reasons For Website Optimization

    Website optimization, also known as search engine optimization (SEO), helps your loyal customers discover your other products and services online more quickly and with much less effort. Moreover, website optimization is a proven way to find new customers who are searching right now for your type of products or services.

    Read the article

  • About Custom Web Development

    Custom web coding lets service providers create a highly targeted surfing experience for potential customers. A web administrator who can offer a personal surfing experience will find it much quicker to win repeat customers and promote future business. Let us face it, not all webpages are made equal.

    Read the article

  • Harness the Power of Sitemaps to Solidify Your Website's Structure

    Nothing frustrates an online user more than browsing through a website which takes ages to load, or one that looks cluttered and disorganized. Rather than painstakingly going through the pages of your site to find the information that they are looking for, they would much rather leave and look for another site which is quicker to load and is more user-friendly. This is precisely the reason why as a website owner, you need to make sure that your site has a solid structure.

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. [Screenshot: example dataflow]

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to think that more components means more work), however it is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words, I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that uses expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. [Screenshot: expression logic for CSPL Split by Month of Birth and Income Category]

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000", "GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. [Screenshots: expression logic for DER Income Category and CSPL Split by Month of Birth and Income Category]

    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs goes to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. [Screenshots: expression logic for CSPL Split by Income Category and CSPL Split by Month of Birth 1 & 2]

    Each of these combinations provides an input to one of the 24 rowcount components, just the same as before. For illustration, here is the dataflow containing three Conditional Split components: [Screenshot: dataflow with three Conditional Split components]

    As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. [Screenshot: results table]

    It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, and hence, if we assume that month of birth and income category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. 1 (the minimum) plus 24 (the maximum), divided by 2.

    Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row; that would account for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

    14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth; the expressions in the Conditional Splits of the third dataflow only have one predicate each, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

    MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

    MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here.

    Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results - let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember: execute using dtexec, not within BIDS.

    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar).

    Any comments? Let me know! @Jamiet
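    For anyone who wants to sanity-check the arithmetic above, here is a small Python sketch (mine, not part of the original post) that reproduces the expected-evaluation figures under the same uniform-distribution assumption and estimates the extra predicate work:

        # Expected number of cases tried before a match in an n-way Conditional Split,
        # assuming the matching case is uniformly distributed across the n cases:
        # (1 + n) / 2, i.e. the average of the best case (1) and the worst case (n).
        def expected_cases_tried(n_cases):
            return (1 + n_cases) / 2

        rows = 4_731_904                              # row count from the post

        one_split     = expected_cases_tried(24)      # design 1: 12.5 expressions/row
        derived_split = 1 + expected_cases_tried(24)  # design 2: 13.5 expressions/row
        income_split  = expected_cases_tried(2)       # design 3, income split:  1.5
        month_split   = expected_cases_tried(12)      # design 3, month splits:  6.5

        print(f"design 1: {one_split} evaluations/row")
        print(f"design 2: {derived_split} evaluations/row")
        print(f"design 3 components: {income_split} and {month_split} evaluations/row")

        # In designs 1 and 2 every case also tests YearlyIncome, so that predicate
        # runs about 12.5 times per row instead of roughly once; multiplied over the
        # whole file, that is the extra work the post attributes the slowdown to.
        extra_income_checks = (one_split - 1) * rows
        print(f"~{extra_income_checks:,.0f} extra YearlyIncome comparisons")  # ~54.4 million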

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore, you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: [Screenshot: [Customer] dimension table]

    Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate] and we need to use those values to look up [Customer].[CustomerId]. The logic will be:

    Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]

    The conventional approach to this would be to use a full cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. Let's take a look at the dataflow: [Screenshot: dataflow using a non-cached Lookup; these are all synchronous components]

    This approach works just fine, however it does have the limitation that it has to issue a SQL statement against your lookup set for every row; thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow. That's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I've shown it post-execution so that I can include the row counts, which help to illustrate what is going on here: [Screenshot: dataflow using Sort, Merge Join and Conditional Split, with row counts]

    Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video, I'll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. [Screenshot: execution-time comparison] Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip

    Comments are welcome as always. @Jamiet
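    To make the cursor-versus-set-based analogy concrete, here is an illustrative Python sketch of the two shapes using toy data (the rows and dates are invented for illustration, not taken from the package): the first function probes the dimension once per incoming row, like the non-cached Lookup, while the second joins on [CustomerName] first, producing more rows than it receives, and then discards the ones that fail the date-range test, like the Merge Join plus Conditional Split.

        from datetime import date

        # Toy type-2 dimension: multiple rows per customer, each with an effective range.
        customer_dim = [
            {"CustomerId": 1, "CustomerName": "Terry", "SCDStartDate": date(2009, 1, 1),  "SCDEndDate": date(2009, 6, 30)},
            {"CustomerId": 2, "CustomerName": "Terry", "SCDStartDate": date(2009, 6, 30), "SCDEndDate": date(9999, 12, 31)},
            {"CustomerId": 3, "CustomerName": "Max",   "SCDStartDate": date(2009, 1, 1),  "SCDEndDate": date(9999, 12, 31)},
        ]

        incoming = [
            {"CustomerName": "Terry", "EffectiveDate": date(2009, 3, 15)},
            {"CustomerName": "Terry", "EffectiveDate": date(2010, 1, 10)},
            {"CustomerName": "Max",   "EffectiveDate": date(2009, 8, 1)},
        ]

        # "Non-cached lookup" shape: one discrete probe per incoming row (a cursor in disguise).
        def lookup_per_row(rows, dim):
            for row in rows:
                match = next(d["CustomerId"] for d in dim
                             if d["CustomerName"] == row["CustomerName"]
                             and d["SCDStartDate"] <= row["EffectiveDate"] <= d["SCDEndDate"])
                yield {**row, "CustomerId": match}

        # "Merge Join + Conditional Split" shape: equi-join on CustomerName first
        # (which multiplies rows), then filter the surplus rows out with the range test.
        def join_then_split(rows, dim):
            joined = [{**row, **d} for row in rows for d in dim
                      if d["CustomerName"] == row["CustomerName"]]  # more rows out than in
            return [j for j in joined
                    if j["SCDStartDate"] <= j["EffectiveDate"] <= j["SCDEndDate"]]

        print(list(lookup_per_row(incoming, customer_dim)))
        print(join_then_split(incoming, customer_dim))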

    Read the article

  • LINQ to DataSet Dataclass assignment question

    - by Overhed
    Hi all, I'm working on a Silverlight project trying to access a database using LINQ to DataSet and then sending data over to Silverlight via an .ASMX web service. I've defined my DataSet using the Server Explorer tool (dragging and dropping all the different tables that I'm interested in). The DataSet is able to access the server and database with no issues. Below is code from one of my Web Methods:

        public List<ClassSpecification> getSpecifications()
        {
            DataSet2TableAdapters.SpecificationTableAdapter Sta = new DataSet2TableAdapters.SpecificationTableAdapter();
            return (from Spec in Sta.GetData().AsEnumerable()
                    select new ClassSpecification()
                    {
                        Specification = Spec.Field<String>("Specification"),
                        SpecificationType = Spec.Field<string>("SpecificationType"),
                        StatusChange = Spec.Field<DateTime>("StatusChange"),
                        Spec = Spec.Field<int>("Spec")
                    }).ToList<ClassSpecification>();
        }

    I created a "ClassSpecification" data class which is going to contain my data, and it has all the table fields as properties. My question is, is there a quicker way of doing the assignment than what is shown here? There are actually about 10 more fields, and I would imagine that since my DataSet knows my table definition, there would be a quicker way of doing the assignment than going field by field. I tried just "select new ClassSpecification()).ToList". Any help would be greatly appreciated.

    Read the article

  • php string versus boolean speed test

    - by ae
    I'm looking at trying to optimise a particular function in a PHP app and foolishly assumed that a boolean lookup in an 'if' statement would be quicker than a string compare. But to check it I put together a short test (see below). To my surprise, the string lookup was quicker. Is there anything wrong with my test (I'm wired on too much coffee so I'm suspicious of my own code)? If not, I would be interested in any comments people have around string versus boolean lookups in PHP.

    The result for the first test (boolean lookup) was 0.168. The result for the second test (string lookup) was 0.005.

        <?php
        $how_many = 1000000;
        $counter1 = 0;
        $counter2 = 0;
        $abc = array('boolean_lookup' => TRUE, 'string_lookup' => 'something_else');

        $start = microtime();
        for ($i = 0; $i < $how_many; $i++) {
            if ($abc['boolean_lookup']) {
                $counter1++;
            }
        }
        echo ($start - microtime());

        echo '<hr>';

        $start = microtime();
        for ($i = 0; $i < $how_many; $i++) {
            if ($abc['string_lookup'] == 'something_else') {
                $counter2++;
            }
        }
        echo ($start - microtime());
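    Not an answer to the PHP question above, but for comparison, here is a minimal sketch of the same two lookups timed in Python; note that it computes elapsed time as end minus start using a monotonic clock:

        import time

        HOW_MANY = 1_000_000
        abc = {"boolean_lookup": True, "string_lookup": "something_else"}

        def bench(label, predicate):
            # Time the loop as (end - start) with a monotonic, float-based clock.
            counter = 0
            start = time.perf_counter()
            for _ in range(HOW_MANY):
                if predicate():
                    counter += 1
            elapsed = time.perf_counter() - start
            print(f"{label}: {elapsed:.3f}s ({counter} hits)")

        bench("boolean lookup", lambda: abc["boolean_lookup"])
        bench("string lookup", lambda: abc["string_lookup"] == "something_else")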

    Read the article

  • Align 3 images in a div , left-centre-right, uneven margin

    - by Adrian
    I could find a workaround for this if I wanted, but it seems wrong and I am trying to learn to code in a neater way. Basically I have a div with 3 images in it; the div is 700px and each image is 220px, so that's 660px with two 20px gaps left and right of the centre image, and the outside images going all the way to their end of the div. Is there a quicker way of doing this without setting up separate ids for each image?

        .contentpictureblock { float: left; }
        .contentpictureblock img { margin-right: 20px; }

        <div class="contentpictureblock">
            <img src="http://...">
            <img src="http://...">
            <img src="http://...">
        </div>

    Doing the above pushes the third image to the next line, which is understandable. I know I could always make separate divs for each image and adjust the margins for each one, but I'm just wondering: is there a quicker, one-off, overflow-type command that I could apply to the above? It would mean the right margin would be on all the images but would have no effect on the positioning of the last image. Thanks for the help.

    Read the article

  • individual license to volume license

    - by Carl
    Hello, my company has about 100 computers, each using individual licenses for MS Office and Windows. We are looking to upgrade about 50 computers. In the past we have used Dell as our main supplier of PCs and would like to continue with them. What would be the best route for us in upgrading these 50 computers? We were thinking volume licensing for quicker setup and configuration. Thanks, Carl

    Read the article

  • LinkedIn Slow on Windows XP with IE 8

    - by Kanini
    On my desktop, which runs Windows XP SP2 with IE 8, LinkedIn takes ages to load. With Chrome, it loads comparatively quickly. However, on my laptop, which runs Windows Vista with IE 8, it is awesomely quick! What could be the reasons?

    Read the article

  • How do I install the LargeFile plugin for gVim?

    - by drapkin11
    I want to install and use the famed LargeFile plugin to help make opening large files quicker and easier. I've installed other plugins for gVim successfully, by simply placing the '.vim' file in the /plugins folder. But this plugin is a little different, in that it comes as a gzipped '.vba' file. After I unzip it, what do I do next? As I understand it, the same author created vimballPlugin.vim, which is supposed to execute it?

    Read the article

  • Which is faster, copying everything at once or one thing at a time?

    - by fredley
    I am transferring a bunch (20+) of large (1GB+) files to my external flash drive over USB 2.0. Is it quicker to just sling them all over at once (starting each one without waiting for the previous transfer to finish), so that there are multiple transfers going on, or to transfer one, wait for it to finish, then transfer the next? The files are coming from a variety of locations so I can't do one single big transfer. Are there any other advantages to one way or the other that are worth considering?

    Read the article

  • Pushing out application packages with SCCM 2007 quickly

    - by JohnyV
    When you set an application to install by creating an advertisement, is there a way to force the process to occur more quickly? I know that on the client you can initiate the machine policy and the user policy, but I don't want to have to touch the client machine. Is there a setting that has a value for wait time, etc.? Thanks

    Read the article
