Search Results

Search found 3604 results on 145 pages for 'steve prior'.

Page 57/145 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • F# in 90 Seconds

    - by Ben Griswold
    I mentioned in a previous post that we’ve started a languages club at the office.  In an effort to decide which language we will first concentrate on, I volunteered to give the rundown on F#.  Rather than providing a summary here, I’ve provided my slide deck for your viewing enjoyment.  There’s nothing special here outside of some pretty cool characters from The 56 Geeks Project by Scott Johnson and a collection of information from my prior functional programming presentations.   Download F# in 90 Seconds

    Read the article

  • New regular expression features in PCRE 8.34 and 8.35

    - by Jan Goyvaerts
    PCRE 8.34 adds some new regex features and changes the behavior of a few others to make it more compatible with the latest versions of Perl. There are no changes to the regex syntax in PCRE 8.35.

    \o{377} is now an octal escape just like \377. This syntax was first introduced in Perl 5.12. It avoids any confusion between octal escapes and backreferences, and it allows octal numbers beyond 377 to be used; e.g. \o{400} is the same as \x{100}. If you have any reason to use octal escapes instead of hexadecimal escapes then you should definitely use the new syntax. Because of this change, \o is now an error when it doesn’t form a valid octal escape. Previously \o was a literal o and \o{377} was a sequence of 377 o‘s.

    In free-spacing mode, whitespace between a quantifier and the ? that makes it lazy or the + that makes it possessive is now ignored. In Perl this has always been the case. In PCRE 8.33 and prior, whitespace ended a quantifier, and any following ? or + was seen as a second quantifier and thus an error.

    The shorthand \s now matches the vertical tab character in addition to the other whitespace characters it previously matched. Perl 5.18 made the same change. Many other regex flavors have always included the vertical tab in \s, just as POSIX has always included it in [[:space:]].

    Names of capturing groups are no longer allowed to start with a digit. This has always been the case in Perl since named groups were added in Perl 5.10. PCRE 8.33 and prior even allowed group names to consist entirely of digits.

    [[:<:]] and [[:>:]] are now treated as POSIX-style word boundaries. They match at the start and the end of a word. Though they use similar syntax, these have nothing to do with POSIX character classes and cannot be used inside character classes. Perl does not support POSIX word boundaries.

    The same changes affect PHP 5.5.10 (and later) and R 3.0.3 (and later), as they have been updated to use PCRE 8.34. RegexBuddy and RegexMagic have been updated to support the latest versions of PCRE, PHP, and R. Older versions that were previously supported are still supported, so you can compare or convert your regular expressions between the latest versions of PCRE, PHP, and R and whichever version you were using previously.
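    To see a couple of these behaviors in running code, here is a minimal sketch using Python's re module, included purely for illustration: Python already puts the vertical tab in \s and rejects group names that start with a digit, but it does not implement PCRE's new \o{...} syntax, so a hexadecimal escape is shown as the unambiguous alternative.

    ```python
    import re

    # \s includes the vertical tab (U+000B), matching the PCRE 8.34 / Perl 5.18 behaviour.
    assert re.search(r"\s", "\x0b") is not None

    # Group names starting with a digit are rejected (the rule PCRE 8.34 now enforces).
    try:
        re.compile(r"(?P<1st>\d+)")
    except re.error as err:
        print("rejected group name:", err)

    # Python has no \o{...} escape; hexadecimal escapes avoid the octal/backreference ambiguity.
    assert re.fullmatch(r"\x41", "A") is not None
    ```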

    Read the article

  • ODUG lands DotNetNuke guru Nik Kalyani as a speaker

    - by Brian Scarbeau
    If you are in the Orlando, FL area during the first week of May then you should head over to the Orlando DotNetNuke user group meeting. Nik Kalyani will be the speaker and you will learn a great deal from him.

    DotNetNuke Module Development with the MVP Pattern

    This session focuses on introducing attendees to the Model-View-Presenter pattern, support for which was recently introduced in the DotNetNuke Core. We'll start with a quick overview of the pattern, compare it to MVC, and then dive right into code. We will start with fundamentals and then develop a full-featured module using this pattern. In order to do justice to the pattern, we will use ASP.NET WebForms controls minimally and implement most of the UI using jQuery plug-ins. Finally, to increase audience participation (both present at the meeting and remote), we will use a hackathon-style model and allow anybody, anywhere to follow along with the presentation and code their own MVP-based solution that they can share online during or after the session. A URL with full instructions for the hackathon will be posted online a few days prior to the meeting.

    About Our Speaker

    Nik Kalyani is Co-founder and Strategic Advisor for DotNetNuke Corp., the company that manages the DotNetNuke Open Source project. Kalyani is also Founder and CEO of HyperCrunch. He is a technology entrepreneur with over 18 years of experience in the software industry. He is irrationally exuberant about technology, especially if it has anything to do with the Internet. HyperCrunch is his latest startup business that builds on his knowledge and experience from prior startups, two of them venture-funded. Kalyani is a creative tinkerer perpetually interested in looking around the corner and figuring out new and interesting ways to make the world a better place. An experienced web developer, he finds the business strategy and marketing aspects of the software business more exciting than writing code. When he does create software, his primary expertise is in creating products with compelling user experiences. Kalyani is most proficient with Microsoft technologies, but has no religious fanaticism about them. Kalyani has a bachelor’s degree in computer science from Western Michigan University. He is a frequent speaker at technology conferences and user group meetings. He lives in Mountain View, California with his wife and daughters. He blogs at http://www.kalyani.com and is @techbubble on Twitter.
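    For readers unfamiliar with the pattern the session covers, here is a minimal, language-neutral sketch of Model-View-Presenter in Python. It illustrates only the pattern itself; the class names are hypothetical and it does not reflect DotNetNuke's actual WebForms/C# MVP API.

    ```python
    class GuestbookModel:
        """Model: owns the data and knows nothing about the UI."""
        def __init__(self):
            self.entries = []

        def add_entry(self, text):
            self.entries.append(text)


    class ConsoleGuestbookView:
        """View: renders whatever the presenter hands it; no business logic."""
        def show_entries(self, entries):
            for number, entry in enumerate(entries, 1):
                print(f"{number}. {entry}")


    class GuestbookPresenter:
        """Presenter: mediates between model and view and holds the use-case logic."""
        def __init__(self, model, view):
            self.model = model
            self.view = view

        def submit(self, text):
            if text.strip():              # validation lives here, not in the view
                self.model.add_entry(text)
            self.view.show_entries(self.model.entries)


    if __name__ == "__main__":
        presenter = GuestbookPresenter(GuestbookModel(), ConsoleGuestbookView())
        presenter.submit("Hello from the MVP sketch")
    ```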

    Read the article

  • What are programmers made to do in spare time in jobs?

    - by Shashank Jain
    Well, with no prior job experience I am completely ignorant of how things happen at software companies. I want to know what programmers do when there is nothing to do. Let's consider Facebook or Twitter. It is quite improbable that the Facebook team always has some feature waiting to be implemented, so software developers presumably have some time when there is nothing to do. Are they free to do anything they like during this time?

    Read the article

  • Difference between Windows and Linux development environments?

    - by Ryan
    I have an interview coming up soon for a Business Analyst position, and the recruiter mentioned some feedback from a prior candidate, who said the interviewers asked him what the difference between a Windows and a Linux development environment was. Are there some high-level things I need to be aware of, from a business point of view, when working with a development team or designing an application on Windows vs Linux?

    Read the article

  • Fastest Functional Language

    - by Farouk
    I've recently been delving into functional programming, especially Haskell and F#, the former more so. After some googling around I could not find a benchmark comparison of the more prominent functional languages (Scala, F#, etc.). I know it's not necessarily fair to some of the languages (Scala comes to mind) given that they are hybrids, but I just want to know which outperforms which on what operations and overall.

    Read the article

  • Customized Database Listener Names Now Supported for EBS

    - by sreelatha.mahendra(at)oracle.com
    The database listener name can now be configured using AutoConfig with the newly introduced context variable s_db_listener. Prior to this certification it was not possible to use AutoConfig-generated listener.ora files for managing listeners from SRVCTL when there were multiple RAC instances on the same server.

    To use this feature, E-Business Suite customers need to apply the following patch:
    11.5.10CU2 - Roll Up Patch 9535311 (RUP-U) or higher
    12.0.x - R12.TXK.A.delta.7 or higher
    12.1.x - R12.TXK.B.delta.3 or higher

    Read the article

  • New Process For Receiving Oracle Certification Exam Results

    - by Brandye Barrington
    On November 15, 2012, Oracle Certification exam results will be available directly from Oracle's certification portal, CertView. After completing an exam at a testing center, you will log in to CertView to access and print your exam scores by selecting the See My New Exam Results Now link or the Print My New Exam Results Now link from the homepage. This will provide access to all certification and exam history in one place through Oracle, providing tighter integration with other activities at Oracle. This change in policy will also increase security around data privacy.

    AUTHENTICATE YOUR CERTVIEW ACCOUNT NOW

    One very important step you must take is to authenticate your CertView account BEFORE taking your exam. This way, if there are any issues with authorization, you have time to get these sorted out before testing. Keep in mind that it can take up to 3 business days for a CertView account to be manually authenticated, so completing this process before testing is key! You will need to create a web account at Pearson VUE prior to registering for your exam, and you will need to create an Oracle Web Account prior to authenticating your CertView account. The CertView account will be available for authentication at certview.oracle.com within 30 minutes of creating a Pearson VUE web account.

    GETTING YOUR EXAM RESULTS FROM ORACLE

    Before taking the scheduled exam, you should authenticate your account at certview.oracle.com using the email address and Oracle Testing ID in your Pearson VUE profile. You will be required to have an Oracle Web Account to authenticate your CertView account. After taking the exam, you will receive an email from Oracle indicating that your exam results are available at certview.oracle.com. If you have previously authenticated your CertView account, simply click on the link in the email, which will take you to CertView, then log in and select See My New Exam Results Now. If you have not authenticated your CertView account before receiving this notification email, you will be required to authenticate your CertView account before accessing your exam results. Authentication requires an Oracle Web Account user name and password and the following information from your Pearson VUE profile: email address and Oracle Testing ID. Click on the link in the email to authenticate your CertView account. You will be given the option to create an Oracle Web Account if you do not already have one. After account authentication, you will be able to log in to CertView and select See My New Exam Results Now to view your exam results or Print My New Exam Results Now to print your exam results. As always, if you need assistance with your CertView account, please contact Oracle Certification Support.

    YOUR QUESTIONS ANSWERED

    More Information
    FAQ: Receiving Exam Scores
    FAQ: How Do I Log Into CertView?
    FAQ: How To Get Exam Results
    FAQ: Accessing Exam Results in CertView
    FAQ: How Will I Know When My Exam Results Are Available?
    FAQ: What If I Don't Get An Exam Results Email Alert?
    FAQ: How To Download and Print Exam Score Reports
    FAQ: What If I Think My Exam Results Are Wrong In CertView?
    FAQ: Is Oracle Changing The Way That Exams Are Scored?

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental Blog series.  Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success.  Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle.  Today, I will discuss project planning and how to effectively plan your iterations.

    Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones.  In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan.  In a project setting, the Implementation Plan is a high-level plan that focuses on milestones, objectives, and the number of iterations.  It is the plan that is typically developed at the start of an engagement, identifying the project phases and milestones.  When the iterative and incremental principle is applied, the Implementation Plan also identifies the number of iterations planned for each phase.  The Implementation Plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration's start during Iteration Planning.  An individual iteration plan is created for each project iteration.

    For my fitness regime, I also created an “Implementation Plan” for my weekly exercise.  My high-level plan included exercising 6 days a week, and since I cross train, trying not to repeat the same exercise two days in a row.  Because running on the hills outside is the most difficult and, consequently, the most effective exercise, my implementation plan includes running outside at least 2 times a week.  Regardless of the exercise selected, I always apply a series of 6-minute interval sets.  I never plan what I will do each day in advance because there are too many changing factors that need to be considered before that level of detail is determined.  If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level.  It is unrealistic to plan each day of the week without considering the unique circumstances at that time.  For example, what is the weather?  Are there conflicting schedule commitments?  Are there injuries that need to be considered?  Likewise, in a project setting, it is best to plan the iteration details prior to each iteration's start.

    Join me for tomorrow’s blog where I will discuss when and how to plan the details of your iterations.

    Read the article

  • Camera rotation flicker in OpenGL ES 2.0

    - by seahorse
    I implemented an orbit camera in my own OpenGL ES 2.0 application and was getting an extensive amount of flicker while rotating the camera using the mouse. When I added the line eglSwapInterval(..., 0.1); after eglSwapBuffers(), the flicker immediately stopped. I am not able to understand why eglSwapInterval solves the flicker problem. (The FPS of my app prior to eglSwapInterval was around 700 FPS.) (The flicker is NOT due to z-fighting, because I have set the near and far clip planes to 100 and 500.)

    Read the article

  • Visual Studio 11 Release Candidate now available

    - by TATWORTH
    Microsoft have released Visual Studio 11 RC at http://www.microsoft.com/visualstudio/11/en-us/downloads#vs - this is a free download! "The available Visual Studio 2012 RC products and localizations are pre-release versions of the next release of Visual Studio. As such, they are not final and are subject to change prior to release. Before installing this pre-release software, you should review the associated readme files for system requirements and a list of known issues." You should read the readme for VS11 RC.

    Read the article

  • Extending WikiPlex with Scope Augmenters

    - by mhawley
    [In addition to blogging, I am also using Twitter. Follow me: @matthawley] Another extension point in WikiPlex is Scope Augmenters. Scope Augmenters allow you to post-process the collection of scopes, further augmenting it by inserting or removing scopes prior to rendering. WikiPlex comes with 3 out-of-the-box Scope Augmenters that it uses for indentation, tables, and lists. For reference, I'll be explaining… (read more)

    Read the article

  • Warning: E-Business Suite Issues with Sun JRE 1.6.0_19

    - by Steven Chan
    Sadly, the issues reported in the following article also apply to JRE 1.6.0_19: Warning: E-Business Suite Issues with Sun JRE 1.6.0_18. Once again, if you haven't already upgraded your end-users to JRE 1.6.0_18 or 1.6.0_19, we recommend that you keep them on a prior JRE release such as 1.6.0_17 (6u17). We're working closely with the Sun JRE team to get this issue resolved as quickly as possible.  Please monitor this blog for updates.

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were no public APIs nor contract load programs provided.  From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1).  This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram and details the interface table definitions.

    The Import program takes data from the interface tables listed below and populates the contracts schema tables:
    OKS_USAGE_COUNTERS_INTERFACE
    OKS_SALES_CREDITS_INTERFACE
    OKS_NOTES_INTERFACE
    OKS_LINES_INTERFACE
    OKS_HEADERS_INTERFACE
    OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

    Mode - Validate only, Import
    Batch Number - Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
    Number of Workers - Number of workers required (these are spawned as separate sub-requests)
    Commit size - Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'.  The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report).  Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors.  The document will be updated over time.  Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1).

    After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system.

    A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract.  This can be used as an example for guidance on how to load the interface tables.  The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1).  Note that the Test program has some limitations which do not apply to the full Import program and is not a supported program; it is simply a testing tool.

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes against both software development models, the Waterfall Model clearly resembles the Tortoise: it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise is quoted several times in the story saying "Slow and steady wins the race," which is the perfect mantra for the Waterfall Model, a model often seen as cumbersome and slow moving.

    Waterfall Model Phases

    Requirement Analysis & Definition
    This phase focuses on defining requirements for a project that is to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design
    This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase; however, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Decisions about scalability, distribution, and reliability should also be considered throughout. The desired output for this phase is a functional design document that states all of the architectural decisions made for the project, along with diagrams such as sequence and class diagrams.

    Software Design
    This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously identified. The output of this phase is a formal design document.

    Implementation / Coding
    This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole.

    Software Integration & Verification
    This phase primarily focuses on testing each of the units of code developed as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected and actual results.

    System Verification
    This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance
    This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of those requirements and determine whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled in this section.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model
    Simple to implement and execute for projects and/or company-wide
    Limited demand on resources
    Large emphasis on documentation

    Disadvantages of the Waterfall Model
    Completed phases cannot be revisited regardless of whether issues arise within a project
    Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    Small changes or errors that arise in applications may cause additional problems
    The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their requirements change along with their customers' desires
    Excess documentation
    Phases are cumbersome and slow moving

    Learn more about the Major Processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with customers' desires as they are formulated and as the software strays from the customers' vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows customers to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.

    Advantages of Prototyping
    Active participation from users and customers
    Allows customers to change their mind in specifying requirements
    Customers get a better understanding of the system as it is developed
    Earlier bug/error detection
    Promotes communication with customers
    Prototype could be used as final production
    Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping
    Promotes constantly redefining project requirements, which can cause major system rewrites
    Potential for increased complexity of a system as the scope of the system expands
    Customers could mistake the prototype for the working version
    Implementation compromises can increase the complexity of applying updates and application fixes

    When companies are trying to decide between the Waterfall model and the Prototype model, they need to evaluate the benefits and disadvantages of both. Typically, smaller companies or projects with major time constraints head for more of a Prototype model approach because it can reduce the time needed to complete the project, since there is more focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer towards more of a Waterfall model because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • IMPORTANT - FY13 OPN Incentive Program VAD Webcast - June 21st @ 4PM GMT

    - by Cinzia Mascanzoni
    Please mark your calendars for the FY13 OPN Incentive Program update webcast on June 21. The objective of this call is to share the FY13 updates to the OPN Incentive Program with you. Thursday, June 21st at 4:00 PM GMT / 5:00 PM CET. Click here for the details of the webcast. Please plan to call in 5-10 minutes prior to the start to avoid delays. We look forward to your participation on this call.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post.

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB.

    (chart available in the original post)

    The chart above illustrates some interesting points about the results:
    When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size)
    For some of the moderately-sized source files, small blocks (256KB) are best
    As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs)
    Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant
    The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (chart available in the original post)

    The above is another view of the same data as the prior chart, just with the axis changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources
    Source code for upload test application
    Source code for random file generator
    OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data: 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads)
    OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data: 256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    Excel worksheet showing summarizations and comparisons
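    For readers who want the shape of the approach without digging into the linked C# harness, here is a minimal conceptual sketch in Python. It is not the author's code: put_block and put_block_list are stand-ins for whichever blob API you use, and the block size and worker count are illustrative defaults.

    ```python
    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    BLOCK_SIZE = 1 * 1024 * 1024  # 1MB blocks gave the best average improvement in the post


    def split_into_blocks(path, block_size=BLOCK_SIZE):
        """Yield (block_id, data) pairs for the source file."""
        with open(path, "rb") as source:
            index = 0
            while True:
                chunk = source.read(block_size)
                if not chunk:
                    break
                yield f"block-{index:06d}", chunk
                index += 1


    def upload_blocked(path, put_block, put_block_list, workers=8):
        """Upload every block in parallel, then commit the ordered block list."""
        blocks = list(split_into_blocks(path))

        def send(item):
            block_id, data = item
            checksum = hashlib.md5(data).hexdigest()  # per-block integrity check, as in the post
            put_block(block_id, data, checksum)       # stand-in for the storage API's PutBlock
            return block_id

        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(send, blocks))

        # Commit in original order so the blob is assembled correctly (PutBlockList equivalent).
        put_block_list([block_id for block_id, _ in blocks])
    ```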

    Read the article

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
    I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd love to make that part of my workflow, prior to uploading a site and making it live. Is there anything I can run locally to give me the same lossless compression? I currently export images using Export for Web from Photoshop, and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way than saving out and replacing the individual images from Page Speed's results.
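    As one hedged starting point for automating this locally, the sketch below walks an export folder and runs an external lossless PNG optimizer over each file. It assumes the optipng command-line tool is installed (pngcrush or zopflipng could be substituted), and the folder name is only an example.

    ```python
    import pathlib
    import subprocess


    def crush_pngs(root):
        """Losslessly recompress every PNG under root, in place."""
        for png in pathlib.Path(root).rglob("*.png"):
            # -o2 is a moderate optimization level; higher levels are slower but still lossless.
            subprocess.run(["optipng", "-o2", str(png)], check=True)


    if __name__ == "__main__":
        crush_pngs("export-for-web")  # hypothetical Photoshop export folder
    ```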

    Read the article

  • Proactive Reputation Management and Your SEO-SEM Company

    Reputation management is often seen as necessary only when a negative publicity attack is under way. While working with an accomplished reputation management company in such circumstances can counter an attack and minimize potential damage, the best results are actually seen when companies start working with a company that will both build and protect their reputation prior to any kind of attack.

    Read the article

  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases. Loading time wasn't usually more than a few seconds. However, since the end of September I noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I do have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph were what the usual response times were prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph. The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end are fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations, California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times since it is physically farther away. I should also point out that in some traceroute tests I've done it seems like the first connection to the server takes the longest, and after that the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. So, what could be causing this problem, and what steps can I take to resolve it or at least narrow down the problem? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. This is not really the source of the problem though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well.
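    One way to quantify the "long WAIT" observation is to time each phase of a request separately. The sketch below is a rough illustration, with a placeholder host: it measures TCP connect time, request send time, and time to first byte using only Python's standard library.

    ```python
    import socket
    import time


    def time_request(host, path="/", port=80):
        """Print connect, send, and wait (time-to-first-byte) durations for one GET."""
        start = time.monotonic()
        sock = socket.create_connection((host, port), timeout=35)
        connected = time.monotonic()

        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode())
        sent = time.monotonic()

        sock.recv(1)                      # blocks until the first byte of the response arrives
        first_byte = time.monotonic()
        sock.close()

        print(f"connect: {connected - start:.3f}s  "
              f"send: {sent - connected:.3f}s  "
              f"wait (TTFB): {first_byte - sent:.3f}s")


    if __name__ == "__main__":
        time_request("example.com")  # placeholder host
    ```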

    Read the article

  • Business Insight, IT Execution: 9 Project Management Tips

    - by Sylvie MacKenzie, PMP
    Excerpt from Profit Magazine - by David Rosenbaum

    When Marcos Baccetto was first asked to be the business-side project lead on Eaton Corporation’s Vehicle Group South America (VGSA) Oracle project, the operations services manager responsible for running manufacturing was, he confesses, “a little afraid” because of his lack of IT experience. Today, Baccetto calls the project “a fantastic experience,” and he is a true believer in the benefits of a close relationship between IT implementers and their line-of-business peers. Through his partnership with Jesiele Lima, then VGSA IT manager, Baccetto and Eaton’s South American operations team came to understand several important principles of business and IT. Here he shares nine tips managers should consider when working on an enterprise technology project.

    1. Make it a business project, not an IT project. All levels of functional management must have ownership, responsibility, and accountability for the success of the implementation.
    2. Share responsibility. Business owners should sign off on tests and data conversion.
    3. Clean your data. Dedicating a team to improve core data quality prior to project launch can be a significant time-saver.
    4. Select resources properly. Have functional people who can translate business needs to IT and can influence organizational change.
    5. Manage scope. Follow project management methodologies and disciplines.
    6. Adopt common processes, global solutions. Avoid customized, local solutions. The big-picture business goals can get lost in the details.
    7. Implement processes prior to the go-live date. Change management can be key. Keep the workforce informed and train users in advance.
    8. Define metrics milestones. Assume there will be a crisis during deployment. Having baseline metrics to compare against will help implementers keep their cool—and the project moving forward.
    9. The sponsor’s commitment is critical. It is needed to support the truly difficult decisions.

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures
    Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity, so a strategic backup system should be mandatory for all companies. That said, in the real world some companies that I have worked for do not really have a good data backup plan, and when companies take this kind of approach to backups the data is usually not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested prior to the time when it actually needs to be used.

    Disaster Recovery Plan
    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically describe how to restore all of a company’s data and infrastructure back to an operational status, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. They should be tested regularly so that the business can be confident it will lose little or no data in a catastrophic disaster.

    Firewall Policies and Content Filters
    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow or deny network access as data passes through them by applying defined restrictions, and they can likewise restrict access from the internal network to the outside.

    Common firewall restrictions:
    Destination/sender IP address
    Destination/sender host names
    Domain names
    Network ports

    Companies may also want to restrict what their network users view on the internet through content filters. Content filters allow a company to track which web pages a person has accessed and can restrict access based on rules set up in the filter. This device and/or software can block access to domains or specific URLs based on a few factors.

    Common content filter criteria:
    Known malicious sites
    Specific page content
    Page content theme

    Anti-Virus/Malware Policies
    Fortunately, most companies utilize antivirus programs on all computers and servers, and with good reason: viruses have been known to corrupt, invalidate, destroy, and steal data. Anti-virus applications are a great way to prevent malicious applications from gaining access to a company’s data. However, anti-virus programs must be constantly updated because new viruses are always being created, and the anti-virus vendors need to distribute updates to their applications so that they can catch and remove them.

    Data Validation Policies and Procedures
    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures
    Data scrubbing is valuable to companies in one of two ways. The first is cleaning data prior to analysis and report generation. The second is that it allows companies to remove things like personally identifiable information from data prior to transmitting it between multiple environments or sending it to an external location. An example of this can be seen with medical records in regards to HIPAA laws that prohibit the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian’s personal information to be stored on a server not located in Canada.

    Encryption Practices
    The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version. Additionally, if for some unforeseen reason the data got into the wrong hands, they would first have to decrypt it before they could even read it. Encryption simply adds an additional layer of protection around the data itself.

    Standard Normalization Practices
    The use of standard data normalization practices is very important when dealing with data because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, increased chance of invalid data, and over-use of data processing.

    Network and Database Security/Access Policies
    Every company has some form of network/data access policy, even if that policy is effectively "none". These policies help secure data from being seen by inappropriate users and prevent data from being updated or deleted by unauthorized users. Without a good security policy there is a large potential for data to be corrupted by unassuming users or even stolen.

    Data Storage Policies
    Data storage policies are very important, particularly when they are implemented in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, and if a user wanted to ensure the existence of a piece of data in the form of a file then they had to store that file in their network folder. Conversely, I have also worked in places where the entire user profile is backed up whenever a user logs on to or off of the network.

    Training Policies
    One of the biggest ways to prevent data loss and ensure that data will remain a company asset is through training. Properly training employees on how to work within systems that access data is crucial to ensuring that a company’s data remains an asset. Users need to be trained on how to manipulate the company’s data in order to perform their tasks, reducing the chances of invalidating data.
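    To make the data-scrubbing idea above concrete, here is a minimal sketch that masks personally identifiable fields before records leave a controlled environment. The field names are hypothetical; an actual policy would enumerate exactly which attributes count as PII.

    ```python
    PII_FIELDS = {"ssn", "date_of_birth", "home_address"}  # hypothetical field names


    def scrub(record, pii_fields=PII_FIELDS):
        """Return a copy of the record with PII fields masked before export or transmission."""
        return {key: ("***REDACTED***" if key in pii_fields else value)
                for key, value in record.items()}


    if __name__ == "__main__":
        patient = {"id": 1042, "ssn": "123-45-6789", "diagnosis_code": "J45.909"}
        print(scrub(patient))  # PII is masked, operational fields pass through
    ```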

    Read the article
