Search Results

Search found 82077 results on 3284 pages for 'better at work'.


  • which mail server is better suited for high volume? [closed]

    - by crashintoty
    I'm planning out a project (web/mobile app) that would require a mail server that could handle hundreds of thousands of connections per hour (both IMAP/POP and SMTP) and that could interface with PHP (or Python or whatever) to dynamically create, delete and check for mailboxes. This is not for spam stuff; I just need my app to generate random mailboxes (and static/permanent ones too) to receive mail and process it for items listed on my service. The little research I've done so far has turned up Courier, Dovecot, Cyrus and Haraka. I think the ability to scale and/or load balance (I'm new to these terms, pardon me) would also be a requirement. Any ideas?


  • Is it better to combine Apache for file manipulation and uploads and Nginx for static file serving, or to use one of the two alone?

    - by user1032393
    Based on my research, I've read that nginx is best and ideal for serving up static files and images. My application depends heavily on uploading images, rewriting them, and then serving them up. Given that I only have one VPS currently, it has been suggested that I use nginx to serve up the images and the website, and reverse proxy to Apache (on the same VPS) to rewrite files with ImageMagick and handle the file uploads. Which would be the best solution: Apache, Nginx, or Apache + Nginx? By "best" I mean minimal average RAM consumption while maintaining decent load times, maybe sub 2 seconds.


  • Following the Thread in OSB

    - by Antony Reynolds
    Threading in OSB

    The Scenario

    I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:

    1. Receive Request
    2. Send Request to External System
    3. If Response has a particular value
       3.1 Modify Request
       3.2 Resend Request to External System
    4. Send Response back to Requestor

    All looks very straightforward, with no nasty wrinkles along the way. The flow was implemented in OSB as follows (see diagram for more details):

    - Proxy Service to receive the Request and send the Response
    - Request Pipeline, which copies the original Request for use in step 3
    - Route Node, which sends the Request to the External System exposed as a Business Service
    - Response Pipeline, which checks the Response to decide if the Request needs to be resubmitted, modifies the Request, and makes a Callout to the External System (same Business Service as the Route Node)

    The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.

    The Surprise

    Imagine our surprise when, on stressing the system, we saw it lock up with large numbers of blocked threads. The reason for the lock-up lies in some subtleties of the OSB thread model, which is the topic of this post.

    Basic Thread Model

    OSB goes to great lengths to avoid holding on to threads. Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node. Most Business Services are implemented by OSB in two parts. The first part uses the request thread to send the request to the target; in the diagram this is represented by the thread T1. After sending the request to the target (the Business Service in our diagram), the request thread is released back to whatever pool it came from. A multiplexor (muxer) is used to wait for the response. When the response is received, the muxer hands it off to a new thread that is used to execute the response pipeline; this is represented in the diagram by T2.

    OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service. In our example we have the "Proxy Service Work Manager" assigned to the Proxy Service and the "Business Service Work Manager" assigned to the Business Service. Note that the Business Service Work Manager is only used to assign the thread that processes the response; it is never used to process the request. This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.

    First Wrinkle

    Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation. For example:

    - The Request Pipeline makes a blocking callout, say to perform a database read.
    - The Business Service response tries to allocate a thread from the thread pool, but all threads are blocked in the database read.
    - New requests arrive and contend with arriving responses for the available threads.

    Similar problems can occur if the response pipeline blocks for some reason, a database update for example.

    Solution

    The solution is to make sure that the Proxy and the Business Service use different Work Managers so that they do not contend with each other for threads.

    Do Nothing Route Thread Model

    So what happens if there is no route node? In this case OSB just echoes the Request message as a Response message, but what happens to the threads?
    OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager. So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.

    Proxy Chaining Thread Model

    So what happens when the route node is actually calling a Proxy Service rather than a Business Service? Does the second Proxy Service use its own thread, or does it re-use the thread of the original Request Pipeline? As you can see from the diagram, when a route node calls another proxy service the original Work Manager is used for both request pipelines. Similarly, the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node. This actually fits in with the earlier description I gave of Business Services, and by extension Route Nodes: they "use the request thread to send the request to the target".

    Call Out Threading Model

    So what happens when you make a Service Callout to a Business Service from within a pipeline? The documentation says that "The pipeline processor will block the thread until the response arrives asynchronously" when using a Service Callout. What this means is that the target Business Service is called using the pipeline thread, but the response is also handled by the pipeline thread. This implies that the pipeline thread blocks waiting for a response. It is the handling of this response that behaves in an unexpected way.

    When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response. The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available. The original pipeline thread can then continue to process the response.

    Second Wrinkle

    This leads to an unfortunate wrinkle. If the Business Service is using the same Work Manager as the Pipeline then it is possible for starvation or a deadlock to occur. The scenario is as follows:

    - A Pipeline makes a Callout; its thread is suspended but still allocated.
    - Multiple Pipeline instances using the same Work Manager are in this state (common for a system under load).
    - A response comes back, but all Work Manager threads are allocated to blocked pipelines.
    - The response cannot be processed, and so the pipeline threads never unblock – deadlock!

    Solution

    The solution is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.

    The Solution to My Problem

    Looking back at my original workflow, we see that the same Business Service is called twice: once in a Routing Node and once in a Response Pipeline Callout. This was what was causing my problem. The response pipeline was using the Business Service Work Manager, but the Service Callout wanted to use the same Work Manager to handle its responses, so eventually my Response Pipeline hogged all the available threads and no responses could be processed. The solution was to create a second Business Service pointing to the same location as the original Business Service; the only difference was to assign a different Work Manager to this Business Service.
    This ensured that when the Service Callout completed there were always threads available to process the response, because the response processing from the Service Callout had its own dedicated Work Manager.

    Summary

    - Request Pipeline: executes on the Proxy Work Manager (WM) thread, so it is limited by the settings of that WM. If no WM is specified then it uses the WLS default WM.
    - Route Node:
      - Request is sent using the Proxy WM thread.
      - The Proxy WM thread is released before getting the response.
      - A muxer is used to handle the response.
      - The muxer hands off the response to a Business Service (BS) WM thread.
    - Response Pipeline: executes on the routed Business Service WM thread, so it is limited by the settings of that WM. If no WM is specified then it uses the WLS default WM.
    - No Route Node (echo functionality):
      - The Proxy WM thread is released.
      - A new thread from the default WM is used for the response pipeline.
    - Service Callout:
      - Request is sent using the proxy pipeline thread.
      - The proxy thread is suspended (not released) until the response comes back.
      - Notification of the response is handled by a BS WM thread, so it is limited by the settings of that WM; if no WM is specified then it uses the WLS default WM. Note this is a very short-lived use of the thread.
      - After notification by the callout BS WM thread, that thread is released and execution continues on the original pipeline thread.
    - Route/Callout to Proxy Service:
      - The Request Pipeline of the callee executes on the requestor's thread.
      - The Response Pipeline of the caller executes on the response thread of the requested proxy.
    - Throttling:
      - The request message may be queued if the limit is reached.
      - The requesting thread is released (route node) or suspended (callout).

    So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline, because the callout will need a notification thread from the same thread pool as the response pipeline. This was the problem we were having. You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline. It also means you may want to have different work managers for the proxy and the business service in the route node. Basically you need to think carefully about how threading impacts your proxy services.

    References

    Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me. Any errors are my own and not theirs. Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.

    - OSB Thread Model
    - Great Blog Post on Thread Usage in OSB
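    The starvation mechanics above are easy to reproduce outside OSB. Below is a minimal Java sketch (an illustration of the mechanism, not OSB code; pool size and timeouts are arbitrary) in which "pipeline" tasks block waiting for a notification that can only be delivered by a thread from the same fixed-size pool. Once every pool thread is parked, the notifications can never run:

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class CalloutDeadlockDemo {
            public static void main(String[] args) throws InterruptedException {
                // One shared pool plays the role of a single Work Manager.
                ExecutorService sharedWm = Executors.newFixedThreadPool(2);
                for (int i = 0; i < 2; i++) {
                    sharedWm.submit(() -> {
                        CountDownLatch response = new CountDownLatch(1);
                        // "Service Callout": the response notification needs a thread
                        // from the very pool this task is still occupying.
                        sharedWm.submit(response::countDown);
                        try {
                            // The pipeline thread suspends but keeps its pool slot.
                            if (!response.await(5, TimeUnit.SECONDS)) {
                                System.out.println("Notification never ran: starved (deadlock).");
                            } else {
                                System.out.println("Response processed.");
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }
                sharedWm.shutdown();
                sharedWm.awaitTermination(10, TimeUnit.SECONDS);
            }
        }

    Giving the notification work its own pool, mirroring the separate Work Manager recommended above, lets the same code complete immediately.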


  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or seeing the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created, and it can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets; the Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single changed line of code all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to prove the existence, and then the continued absence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; however, there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?


  • Is it better to wait for Drupal 7 before starting to learn Drupal, or is there no big difference from Drupal 6?

    - by artmania
    Hi friends, I'm going to start some projects in the next few weeks. I currently have my own custom CMS, but I want to go with an open-source CMS solution from now on, and as I researched, Drupal is COOL for any kind of project, from small scope to the most complex ones. I have never used Drupal before; I know nothing about it yet. Would it be worth working with Drupal 6 for now, or is it better to wait for Drupal 7? Is there any big difference in the way it works? I would feel silly spending days or weeks learning Drupal 6 now if, in a few months when the Drupal 7 beta is out, I face a new learning curve and have trouble upgrading my current Drupal 6 projects to Drupal 7. Thanks a lot!! I appreciate any advice!


  • How to draw a better-looking graph (A4 size) in Dot?

    - by Nazgulled
    Hi, I have this project that's due in a few hours and I still have a report to write... The project has nothing to do with Dot, but we were asked to draw a graph with Dot, which I did. It looks something like this: http://img683.imageshack.us/img683/9735/dotj.jpg The longer arrows represent smaller weights and the shorter arrows represent bigger weights. There isn't any problem with submitting my project like this; it does what it's supposed to do, and this Dot thing is just an extra. But I would like to make it pretty, and I just don't have time to learn about Dot right now. Basically, all I want is to make it pretty. Perhaps a bigger height for the page, like A4 paper size, and have the graph extend toward the bottom rather than everything spreading to the side. What should I put in my .dot file to make it look better?


  • ergonomics: what's better; trackball, ergonomic mouse or some other pointing device (a-la touchscreen)?

    - by mauriciopastrana
    So I bit into the hype and recently purchased an Apple wireless keyboard and that evil bar-of-soap thing Apple makes for a mouse. A couple of hundred dollars later, and this is where I begin to worry about RSI. Go figure. Don't get me wrong, this Apple mouse is genius and looks pretty as hell, but my right wrist feels tired after a full day's worth of work, so I'm thinking of switching. Anyone out there use a trackball? Is this worse? Should I get a super-ergonomic mouse instead? I've seen mouse-trackball combos but am not sold; they still elicit the same end-finger behaviour that's detrimental for RSI, right? I also have a wrist-rest mousepad, but couldn't find one suitable for my keyboard. I've even considered having a small touchscreen where the mousepad should go, with no mouse (or alternatively, a USB trackpad). Just looking for ideas: is the trackball better than the mouse? /mp


  • What is better: set up underestimated or overestimated deadlines?

    - by sergdev
    Suppose you are a project manager. You can estimate the effort in days for a specific task for a specific developer. After performing the estimation you obtain some min and max values. After this you delegate the task to a developer, and you also set a deadline. Which estimate is better to use when setting the deadline: min or max? As I see it, the min estimate can result in stress for the developer, while the max estimate can result in using all of the time allocated, even if the task could be completed faster (so-called Student syndrome). What other pros and cons do the two approaches have? Small clarification: I am speaking about setting deadlines for subordinates when delegating a task, NOT for reporting to my boss.


  • Which programming languages have helped you to understand programming better?

    - by Xaisoft
    Which programming languages not only make you more proficient in the particular language you are learning, but also have a direct impact on the way you think about and understand programming in general, therefore making you a better programmer in other languages? Basically, which languages have the biggest impact on understanding the how and why of different programming concepts? What about Scheme? I have heard good things about that. I have thought about taking the simplest of problems and implementing them in various languages. Has anyone done this?


  • Is there a better way than this to find out if an IsolatedStorage file exists or not?

    - by Edward Tanguay
    I'm using IsolatedStorage in a Silverlight application for caching, so I need to know whether a file exists or not, which I do with the following method. I couldn't find a FileExists method for IsolatedStorage, so I'm just catching the exception, but it seems to be quite a general exception and I'm concerned it will catch more than "the file doesn't exist". Is there a better way to find out if a file exists in IsolatedStorage than this?

        public static string LoadTextFromIsolatedStorageFile(string fileName)
        {
            string text = String.Empty;
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                try
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Open, isf))
                    {
                        using (StreamReader sr = new StreamReader(isfs))
                        {
                            string lineOfData = String.Empty;
                            while ((lineOfData = sr.ReadLine()) != null)
                                text += lineOfData;
                        }
                    }
                    return text;
                }
                catch (IsolatedStorageException ex)
                {
                    return "";
                }
            }
        }


  • Is there a better way to convert SQL datetime from hh:mm:ss to hhmmss?

    - by Johann J.
    I have to write an SQL view that returns the time part of a datetime column as a string in the format hhmmss (apparently SAP BW doesn't understand hh:mm:ss). This code is the SAP-recommended way to do it, but I think there must be a better, more elegant way to accomplish this:

        TIME = case len(convert(varchar(2), datepart(hh, timecolumn)))
                   when 1 then /* Hour Part of TIMES */
                       case convert(varchar(2), datepart(hh, timecolumn))
                           when '0' then '24' /* Map 00 to 24 ( TIMES ) */
                           else '0' + convert(varchar(1), datepart(hh, timecolumn))
                       end
                   else convert(varchar(2), datepart(hh, timecolumn))
               end
             + case len(convert(varchar(2), datepart(mi, timecolumn)))
                   when 1 then '0' + convert(varchar(1), datepart(mi, timecolumn))
                   else convert(varchar(2), datepart(mi, timecolumn))
               end
             + case len(convert(varchar(2), datepart(ss, timecolumn)))
                   when 1 then '0' + convert(varchar(1), datepart(ss, timecolumn))
                   else convert(varchar(2), datepart(ss, timecolumn))
               end

    This accomplishes the desired result: 21:10:45 is displayed as 211045. I'd love something more compact and easily readable, but so far I've come up with nothing that works.


  • Is regex too slow? Real life examples where simple non-regex alternative is better

    - by polygenelubricants
    I've seen people here make comments like "regex is too slow!", or "why would you do something so simple using regex!" (and then present a 10+ line alternative instead), etc. I haven't really used regex in an industrial setting, so I'm curious whether there are applications where regex is demonstrably just too slow, AND where a simple non-regex alternative exists that performs significantly (maybe even asymptotically!) better. Obviously many highly-specialized string manipulations with sophisticated string algorithms will outperform regex easily, but I'm talking about cases where a simple solution exists and significantly outperforms regex. What counts as simple is subjective, of course, but I think a reasonable standard is that if it uses only String, StringBuilder, etc., then it's probably simple.
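    Since the question is framed in Java terms (String, StringBuilder), here is a rough Java sketch of the simplest such comparison: a plain substring test done with String.contains versus a precompiled Pattern. This is a toy benchmark (no JMH, no warm-up control), so treat the numbers as indicative only:

        import java.util.regex.Pattern;

        public class RegexVsContains {
            public static void main(String[] args) {
                // Build a haystack with the needle at the end.
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 1000; i++) sb.append('x');
                String haystack = sb.append("needle").toString();

                Pattern p = Pattern.compile("needle"); // precompiled, to be fair to regex
                int iterations = 100000;

                long t0 = System.nanoTime();
                boolean found1 = false;
                for (int i = 0; i < iterations; i++) found1 |= haystack.contains("needle");
                long t1 = System.nanoTime();

                boolean found2 = false;
                for (int i = 0; i < iterations; i++) found2 |= p.matcher(haystack).find();
                long t2 = System.nanoTime();

                // Both should print true; timings are rough.
                System.out.println("contains: " + found1 + " in " + (t1 - t0) / 1000000 + " ms");
                System.out.println("regex:    " + found2 + " in " + (t2 - t1) / 1000000 + " ms");
            }
        }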


  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com.

    I want to store information on music, specifically:

    - genres
    - artists
    - albums
    - songs

    This information will be used in a web application, and I want people to be able to see all of the songs associated with an album, the albums associated with an artist, and the artists associated with a genre. I'm currently using MySQL, but before I make a decision to switch I want to know:

    - How easy is scaling horizontally?
    - Is it easier to manage than an SQL-based solution?
    - Would the data above be too hard to store schema-free?
    - When I think "association", I immediately think RDBMSs; can data be stored in something like CouchDB but still have the kinds of associations stated above?


  • What is better, an STL list or an STL map for 20 entries, considering order of insertion is also important?

    - by Abhijeet
    I have the following scenario. The implementation is required for a real-time application.

    1) I need to store at most 20 entries in a container (STL map, STL list, etc.).
    2) If a new entry comes and 20 entries are already present, I have to overwrite the oldest entry with the new one.

    Considering point 2, I feel that if the container is full (max 20 entries), a list is the best bet, as I can always remove the first entry in the list and add the new one at the end (push_back). However, search won't be as efficient. For only 20 entries, does it really make a big difference in terms of search efficiency if I use a list in place of a map? Also, considering the cost of insertion in a map, I feel I should go for a list. Could you please tell me which is the better bet?
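    What is being described is a bounded, insertion-ordered buffer that evicts its oldest entry. The question is about C++ STL containers, but as a sketch of the pattern itself (Java here, to keep a single language for the examples on this page), a LinkedHashMap gives both insertion order and keyed lookup, with removeEldestEntry handling the eviction:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
            private static final int MAX_ENTRIES = 20;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the oldest insertion once the 21st entry arrives.
                return size() > MAX_ENTRIES;
            }

            public static void main(String[] args) {
                BoundedCache<Integer, String> cache = new BoundedCache<>();
                for (int i = 0; i < 25; i++) cache.put(i, "entry " + i);
                System.out.println(cache.keySet()); // [5, 6, ..., 24]: oldest five evicted
            }
        }

    In C++ terms the usual equivalents are a std::list paired with a map index, or a plain ring buffer, which is exactly the trade-off being weighed above.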


  • transform:translateX vs transition on the left property: which has better performance? (CSS)

    - by JackMahoney
    I'm making a slide-out menu with HTML and CSS3, especially transitions. I would like to know the best practice / best performance way to slide a relatively positioned div horizontally. When I click a button it adds a class to my div. Which class is better? (Note: I can add all the browser prefixes later, and this site only targets modern browsers.)

        /* option 1 */
        .animate {
            -webkit-transition: all ease 0.3s;
            -webkit-transform: translateX(200px); /* the title says translateX; the original snippet had translateZ, presumably a typo */
        }

        /* option 2 */
        .animate {
            -webkit-transition: all ease 0.3s;
            left: 200px;
        }

    Thanks


  • JRuby wrong element type class java.lang.String(array contains char) related to JAVA_HOME

    - by Daryl
    I am on 64-bit Ubuntu running:

        java version "1.6.0_18"
        OpenJDK Runtime Environment (IcedTea6 1.8) (6b18-1.8-0ubuntu1)
        OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)

    and:

        jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2010-02-11 6586) (OpenJDK 64-Bit Server VM 1.6.0_18) [amd64-java]

    I have this code running on my Windows 7 computer at home. I recently copied my whole folder over to Ubuntu and installed Java, JRuby and the associated gems, but I get this error when I run my main file with jruby run.rb test:

        =================Processing FREDERICKSBURG_1.1=======================
        ERROR IN TESTING wrong element type class java.lang.String(array contains char)
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        /home/daryl/Desktop/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    The focus of the error is:

        ERROR IN TESTING wrong element type class java.lang.String(array contains char)

    Everything works fine on my Windows machine. I figured I was getting this error because I did not have JAVA_HOME set, so I added this to bashrc:

        export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk

    and have confirmed:

        echo $JAVA_HOME
        /usr/lib/jvm/java-1.6.0-openjdk

    I can produce a similar error by removing my JAVA_HOME variable on Windows:

        =================Processing FREDERICKSBURG_1.3=======================
        ERROR IN TESTING cannot convert instance of class org.jruby.RubyString to char
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        C:/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        C:/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        C:/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `each'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    It is obviously not exactly the same, but I have a feeling this has to do with the Java path. You can probably tell from the error that I am just trying to convert a Ruby variable to Java using to_java. This works fine on my Windows machine, and I have confirmed the gems are the same, but I don't think this has to do with gems. I lied: I changed my JAVA_HOME back on my Windows machine and this error still occurs.

    So now the code doesn't run on either machine. I recently installed git on my Windows machine and added the code to a repository, but I haven't really done anything with it. All it said was that it would convert all LF to CRLF... That shouldn't change anything though, should it? Any ideas on why I am now getting these errors? I haven't changed anything on my Windows machine in months except for installing git.


  • Better way to write this regex to match multi-ordered property list?

    - by Andrew Philips
    I've been whacking on this regex for a while, trying to build something that can pick out multiple ordered property values (DTSTART, DTEND, SUMMARY) from an .ics file. I have other options (like reading one line at a time and scanning), but wanted to build a single regex that can handle the whole thing.

    SAMPLE PERL

        # There has got to be a better way...
        my $x1 = '(?:^DTSTART[^\:]*:(?<dts>.*?)$)';
        my $x2 = '(?:^DTEND[^\:]*:(?<dte>.*?)$)';
        my $x3 = '(?:^SUMMARY[^\:]*:(?<dtn>.*?)$)';
        my $fmt = "$x1.*$x2.*$x3|$x1.*$x3.*$x2|$x2.*$x1.*$x3|$x2.*$x3.*$x1|$x3.*$x1.*$x2|$x3.*$x2.*$x1";
        if ($evts[1] =~ /$fmt/smo) {
            printf "lines:\n==>\n%s\n==>\n%s\n==>\n%s\n", $+{dts}, $+{dte}, $+{dtn};
        } else {
            print "Failed.\n";
        }

    SAMPLE DATA

        BEGIN:VEVENT
        UID:0A5ECBC3-CAFB-4CCE-91E3-247DF6C6652A
        TRANSP:OPAQUE
        SUMMARY:Gandalf_flinger1
        DTEND:20071127T170005
        DTSTART,lang=en_us:20071127T103000
        DTSTAMP:20100325T003424Z
        X-APPLE-EWS-BUSYSTATUS:BUSY
        SEQUENCE:0
        END:VEVENT

    SAMPLE OUTPUT

        lines:
        ==>
        20071127T103000
        ==>
        20071127T170005
        ==>
        Gandalf_flinger1
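    For comparison, the line-by-line scan mentioned above as an alternative is short and order-independent. A hedged sketch (Java rather than Perl, to keep one language for the examples on this page; the property names and any-order requirement are taken from the question):

        import java.util.HashMap;
        import java.util.Map;

        public class IcsScan {
            private static final String[] KEYS = {"DTSTART", "DTEND", "SUMMARY"};

            // Collect the three properties from one VEVENT, whatever order they appear in.
            static Map<String, String> scan(String[] lines) {
                Map<String, String> props = new HashMap<>();
                for (String line : lines) {
                    int colon = line.indexOf(':');
                    if (colon < 0) continue;
                    for (String key : KEYS) {
                        // The property name may be followed by ':' directly, or by parameters.
                        char next = line.startsWith(key) ? line.charAt(key.length()) : 0;
                        if (next == ':' || next == ',' || next == ';') {
                            props.put(key, line.substring(colon + 1));
                        }
                    }
                }
                return props;
            }

            public static void main(String[] args) {
                String[] evt = {
                    "BEGIN:VEVENT", "SUMMARY:Gandalf_flinger1", "DTEND:20071127T170005",
                    "DTSTART,lang=en_us:20071127T103000", "END:VEVENT"
                };
                System.out.println(scan(evt)); // {DTEND=..., DTSTART=..., SUMMARY=...}
            }
        }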


  • Android: Is it better to start and stop a service each time it is needed, or to let a service run and bind/unbind to it?

    - by Flo
    I'm developing an app that checks several conditions during an incoming phone call. The main parts of the app are a BroadcastReceiver listening for Intents related to the phone's status, and a local Service checking the conditions. At the moment the service is started each time an incoming call is detected and stopped when the phone status changes back to idle. Now I'm wondering whether this procedure is correct and whether it is reasonable to start and stop the service based on the phone's status, or whether it would be better to let the service run regardless of the phone's status and bind/unbind to/from it when needed. Are there any performance issues I would have to think about? Perhaps it is more expensive to start/stop a service than to let it run and communicate with it. Are there any best practices out there regarding the implementation of services?
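    For reference, a minimal sketch of the start/stop variant described above. Assumptions: the receiver is registered for android.intent.action.PHONE_STATE with the READ_PHONE_STATE permission, and ConditionCheckService is a hypothetical name for the condition-checking service:

        import android.app.Service;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.os.IBinder;
        import android.telephony.TelephonyManager;

        // Hypothetical service that checks the conditions; the name is ours, not from the question.
        class ConditionCheckService extends Service {
            @Override
            public IBinder onBind(Intent intent) { return null; } // started, not bound
        }

        public class CallStateReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                String state = intent.getStringExtra(TelephonyManager.EXTRA_STATE);
                Intent svc = new Intent(context, ConditionCheckService.class);
                if (TelephonyManager.EXTRA_STATE_RINGING.equals(state)) {
                    context.startService(svc);  // incoming call: start checking conditions
                } else if (TelephonyManager.EXTRA_STATE_IDLE.equals(state)) {
                    context.stopService(svc);   // back to idle: stop the service
                }
            }
        }

    The bind/unbind variant keeps the Service object alive between calls; which is cheaper depends on how much setup the service does in onCreate.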


  • Ruby: Is there a better way to iterate over multiple (big) files?

    - by zxcvbnm
    Here's what I'm doing (sorry for the variable names, I'm not using those in my code):

        File.open("out_file_1.txt", "w") do |out_1|
          File.open("out_file_2.txt", "w") do |out_2|
            File.open_and_process("in_file_1.txt", "r") do |in_1|
              File.open_and_process("in_file_2.txt", "r") do |in_2|
                while line_1 = in_1.gets do
                  line_2 = in_2.gets # input files have the same number of lines
                  # process data and output to files
                end
              end
            end
          end
        end

    The open_and_process method is just to open the file and close it once it's done. It's taken from the pickaxe book. Anyway, the main problem is that the code is nested too deeply. I can't load all the files' contents into memory, so I have to iterate line by line. Is there a better way to do this? Or at least prettify it?
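    As a cross-language illustration of flattening the nesting (Java here, to keep one language for the examples on this page; the file names are taken from the question), a single try-with-resources block can hold all four files and still guarantees they are closed:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class TwoFileZip {
            public static void main(String[] args) throws IOException {
                // One flat resource block replaces four levels of nesting.
                try (BufferedReader in1 = new BufferedReader(new FileReader("in_file_1.txt"));
                     BufferedReader in2 = new BufferedReader(new FileReader("in_file_2.txt"));
                     PrintWriter out1 = new PrintWriter(new FileWriter("out_file_1.txt"));
                     PrintWriter out2 = new PrintWriter(new FileWriter("out_file_2.txt"))) {
                    String line1;
                    while ((line1 = in1.readLine()) != null) {
                        String line2 = in2.readLine(); // input files have the same number of lines
                        // process data and write to the output files here
                        out1.println(line1);
                        out2.println(line2);
                    }
                } // all four files are closed automatically, even on exception
            }
        }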


  • Clever ways to better test GPS code using only the iPhone simulator?

    - by Patty
    I've been playing around with the iPhone SDK, using MapKit and Core Location. What are some of the tricks you can use to better test things... while still on the simulator (long before I have to try it out on my iPhone)? Is there a way to use NSTimer and regularly get 'pretend' values for location, heading, speed, etc.? The simulator giving only one location... and no movement... really limits its 'testing' usefulness.


  • Is there a better way to create an object-oriented class with jquery?

    - by Devon
    I use the jQuery extend function to extend a class prototype. For example:

        MyWidget = function(name_var) {
            this.init(name_var);
        }

        $.extend(MyWidget.prototype, {
            // object variables
            widget_name: '',

            init: function(widget_name) {
                // do initialization here
                this.widget_name = widget_name;
            },

            doSomething: function() {
                // an example object method
                alert('my name is ' + this.widget_name);
            }
        });

        // example of using the class built above
        var widget1 = new MyWidget('widget one');
        widget1.doSomething();

    Is there a better way to do this? Is there a cleaner way to create the class above with only one statement instead of two?


  • Which is a better option, OpenGL or a game engine, for developing a game for the iPhone or iPod touch?

    - by balraj
    I am new to OpenGL ES, and I'm about to begin a 3D game for the iPhone in which we show some car pursuit or racing. Is it possible with just OpenGL ES or UIKit alone, or do I have to use other tools for it? I am comfortable with UIKit but newer to OpenGL/OpenGL ES; which would be better for starting this game? Or should I use a game engine? If so, which game engine would give us the 3D feeling, quality of images and motion, and rendering of the views with the sound effects?


  • Which is better for quickly developing small utilities? AutoIt or AutoHotKey?

    - by Abhijeet Pathak
    Which is better for quickly developing small utilities: AutoIt, AutoHotKey, or something else? I need to develop some small software for which I think using a professional suite like Visual Studio would be overkill. Most macro-recording tools like AutoIt or AutoHotKey provide enough power to write a decent application, plus they are small and free. Which option would be good: one of these tools, or some other small/free compiler?


  • Is there a better way to avoid an infinite loop using winforms?

    - by Hamish Grubijan
    I am using .NET 3.5 for now. Right now I am using a "using" trick to disable and enable events around certain sections of code. The user can change either days, hours, minutes or total minutes, and that should not cause an infinite cascade of events (e.g. minutes changing total, total changing minutes, etc.). While the code does what I want, there might be a better / more straightforward way. Do you know of any?

    For brownie points:

    - This control will be used by multiple teams - I do not want to make it embarrassing.
    - I suspect that I do not need to reinvent the wheel when defining hours in a day, days in a week, etc. Some other standard .NET library out there must have it.
    - Any other remarks regarding the code?
    - This using (EventHacker.DisableEvents(this)) business - that must be a common pattern in .NET... changing a setting temporarily. What is the name of it? I'd like to be able to refer to it in a comment and also read up more on current implementations. In the general case, not only a handle to the thing being changed needs to be remembered, but also the previous state (in this case the previous state does not matter - events are turned on and off unconditionally). Then there is also the possibility of multi-threaded hacking. One could also utilize generics to make the code arguably cleaner. Figuring all this out could lead to a multi-page blog post. I'd be happy to hear some of the answers.

    P.S. Does it seem like I suffer from obsessive compulsive disorder? Some people like to get things finished and move on; I like to keep them open... there is always a better way.

        // Corresponding Designer class is omitted.
        using System;
        using System.Windows.Forms;

        namespace XYZ // Real name masked
        {
            interface IEventHackable
            {
                void EnableEvents();
                void DisableEvents();
            }

            public partial class PollingIntervalGroupBox : GroupBox, IEventHackable
            {
                private const int DAYS_IN_WEEK = 7;
                private const int MINUTES_IN_HOUR = 60;
                private const int HOURS_IN_DAY = 24;
                private const int MINUTES_IN_DAY = MINUTES_IN_HOUR * HOURS_IN_DAY;
                private const int MAX_TOTAL_DAYS = 100;

                // Anything faster than once per minute can bog down our servers.
                private static readonly decimal MIN_TOTAL_NUM_MINUTES = 1;
                // 99 days should be plenty. The value was chosen so as not to cause an
                // overflow exception - numericUpDownControls each have a Maximum setting.
                private static readonly decimal MAX_TOTAL_NUM_MINUTES = (MAX_TOTAL_DAYS * MINUTES_IN_DAY) - 1;

                public PollingIntervalGroupBox()
                {
                    InitializeComponent();
                    InitializeComponentCustom();
                }

                private void InitializeComponentCustom()
                {
                    this.m_upDownDays.Maximum = MAX_TOTAL_DAYS - 1;
                    this.m_upDownHours.Maximum = HOURS_IN_DAY - 1;
                    this.m_upDownMinutes.Maximum = MINUTES_IN_HOUR - 1;
                    this.m_upDownTotalMinutes.Maximum = MAX_TOTAL_NUM_MINUTES;
                    this.m_upDownTotalMinutes.Minimum = MIN_TOTAL_NUM_MINUTES;
                }

                private void m_upDownTotalMinutes_ValueChanged(object sender, EventArgs e)
                {
                    setTotalMinutes(this.m_upDownTotalMinutes.Value);
                }

                private void m_upDownDays_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }
                private void m_upDownHours_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }
                private void m_upDownMinutes_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }

                private void updateTotalMinutes()
                {
                    this.setTotalMinutes(
                        MINUTES_IN_DAY * m_upDownDays.Value +
                        MINUTES_IN_HOUR * m_upDownHours.Value +
                        m_upDownMinutes.Value);
                }

                public decimal TotalMinutes
                {
                    get { return m_upDownTotalMinutes.Value; }
                    set { m_upDownTotalMinutes.Value = value; }
                }

                public decimal TotalHours { set { setTotalMinutes(value * MINUTES_IN_HOUR); } }
                public decimal TotalDays { set { setTotalMinutes(value * MINUTES_IN_DAY); } }
                public decimal TotalWeeks { set { setTotalMinutes(value * DAYS_IN_WEEK * MINUTES_IN_DAY); } }

                private void setTotalMinutes(decimal nTotalMinutes)
                {
                    if (nTotalMinutes < MIN_TOTAL_NUM_MINUTES)
                    {
                        setTotalMinutes(MIN_TOTAL_NUM_MINUTES);
                        return; // Must be careful with recursion.
                    }
                    if (nTotalMinutes > MAX_TOTAL_NUM_MINUTES)
                    {
                        setTotalMinutes(MAX_TOTAL_NUM_MINUTES);
                        return; // Must be careful with recursion.
                    }
                    using (EventHacker.DisableEvents(this))
                    {
                        // First set the total minutes
                        this.m_upDownTotalMinutes.Value = nTotalMinutes;
                        // Then set the rest
                        this.m_upDownDays.Value = (int)(nTotalMinutes / MINUTES_IN_DAY);
                        nTotalMinutes = nTotalMinutes % MINUTES_IN_DAY; // variable reuse.
                        this.m_upDownHours.Value = (int)(nTotalMinutes / MINUTES_IN_HOUR);
                        nTotalMinutes = nTotalMinutes % MINUTES_IN_HOUR;
                        this.m_upDownMinutes.Value = nTotalMinutes;
                    }
                }

                // Event magic
                public void EnableEvents()
                {
                    this.m_upDownTotalMinutes.ValueChanged += this.m_upDownTotalMinutes_ValueChanged;
                    this.m_upDownDays.ValueChanged += this.m_upDownDays_ValueChanged;
                    this.m_upDownHours.ValueChanged += this.m_upDownHours_ValueChanged;
                    this.m_upDownMinutes.ValueChanged += this.m_upDownMinutes_ValueChanged;
                }

                public void DisableEvents()
                {
                    this.m_upDownTotalMinutes.ValueChanged -= this.m_upDownTotalMinutes_ValueChanged;
                    this.m_upDownDays.ValueChanged -= this.m_upDownDays_ValueChanged;
                    this.m_upDownHours.ValueChanged -= this.m_upDownHours_ValueChanged;
                    this.m_upDownMinutes.ValueChanged -= this.m_upDownMinutes_ValueChanged;
                }

                // We give as little info as possible to the 'hacker'.
                private sealed class EventHacker : IDisposable
                {
                    IEventHackable _hackableHandle;

                    public static IDisposable DisableEvents(IEventHackable hackableHandle)
                    {
                        return new EventHacker(hackableHandle);
                    }

                    public EventHacker(IEventHackable hackableHandle)
                    {
                        this._hackableHandle = hackableHandle;
                        this._hackableHandle.DisableEvents();
                    }

                    public void Dispose()
                    {
                        this._hackableHandle.EnableEvents();
                    }
                }
            }
        }
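    On the "what is the name of it" question: this flip-a-setting-and-restore-it-on-scope-exit shape is commonly called a scope guard (RAII in C++ circles), and C#'s using statement is one expression of it. A minimal Java sketch of the same idea, with the restore action captured explicitly (an illustration of the pattern, not the asker's API):

        public class ScopeGuardDemo {
            // A guard that flips a setting on construction and restores it on close().
            static final class EventSuppressor implements AutoCloseable {
                private final Runnable restore;

                EventSuppressor(Runnable disable, Runnable restore) {
                    this.restore = restore;
                    disable.run(); // flip the setting on entry
                }

                @Override
                public void close() {
                    restore.run(); // guaranteed restore on scope exit, even on exceptions
                }
            }

            public static void main(String[] args) {
                try (EventSuppressor ignored = new EventSuppressor(
                        () -> System.out.println("events disabled"),
                        () -> System.out.println("events re-enabled"))) {
                    System.out.println("updating controls without event feedback");
                }
            }
        }

    Capturing the restore action as a value also answers the "previous state" remark above: the guard can remember whatever it needs to undo.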

