Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.

Page 190/2677

  • In Jetty, can NCSARequestLog be configured to auto-detect the time zone?

    - by David Moles
    It looks as though in 2003 or so NCSARequestLog would pick up the system time zone using TimeZone.getDefault().getID(), but in more recent versions it seems as though it's hard-coded to GMT, unless you override it in jetty.xml or programmatically. If true, this is kind of annoying, since it means that if you don't want GMT logging you have to edit jetty.xml for every installation (not to mention twice a year for daylight saving time). Is there a workaround that will let Jetty pick the time zone up from the system? (X-posted to Stack Overflow.)
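
    One programmatic workaround, sketched below in Java, is to build the request log yourself and hand it the JVM's default zone. This is only a sketch: it assumes the eclipse-era Jetty packages (older builds use org.mortbay equivalents) and that NCSARequestLog exposes setLogTimeZone in the version in question, so check it against the actual API before relying on it.

        import java.util.TimeZone;
        import org.eclipse.jetty.server.NCSARequestLog;
        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.server.handler.RequestLogHandler;

        public class SystemZoneRequestLog {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);

                NCSARequestLog requestLog = new NCSARequestLog("logs/yyyy_mm_dd.request.log");
                // Feed in whatever zone the OS/JVM reports instead of the hard-coded GMT default.
                requestLog.setLogTimeZone(TimeZone.getDefault().getID());

                RequestLogHandler logHandler = new RequestLogHandler();
                logHandler.setRequestLog(requestLog);
                server.setHandler(logHandler);

                server.start();
                server.join();
            }
        }

    The same TimeZone.getDefault().getID() value is what a jetty.xml-based setup would need per installation, which is exactly the manual step the poster wants to avoid.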

    Read the article

  • After a period of time, nslookup still works, but pinging, and an auto-refreshed website, fail.

    - by Mark Hurd
    Contrary to this SO question, this is for a dotted name (gw.localnet.au), and it doesn't happen straight away, only after some period of time (quite a long time, possibly days). In fact this is for my ADSL router and its internal IP address, which I have named within the router itself and in my Windows Server 2003 Domain Controller DNS Service. Specifically, localnet.au is an Active-Directory-backed primary domain. In fact, an ipconfig /flushdns may fix the problem, but only after a while (about the time it took me to type in this question :-) ). That doesn't explain the root cause though... Manually transferred from stackoverflow.com

    Read the article

  • How did my laptop lose daylight saving time on power loss?

    - by EndangeringSpecies
    It was an old laptop that may or may not have a dead CMOS battery (at least it had its time reset to 0 when I dusted it off). So, it was plugged into power for a while without the main battery and the clock was correct. Then there was a power outage. When I subsequently turned it on, surprise-surprise, it was one hour off the correct time. So the clock apparently kept running during the outage, but the daylight saving time info vanished (from disk? where is it stored?). How come?

    Read the article

  • Windows will read this particular DVD, which it usually doesn't, if it spins for a longer time

    - by curious_kid
    I'm unable to figure out what's wrong with my system. My computer will read this particular DVD sometimes, and mostly it won't. When trying to explore the DVD, I get the error "Windows can't access this disc" (note: it doesn't say that the "disc is empty", the normal error which I generally get when I try to read a scratched DVD). OK, now the mysterious part: I have noticed that every time the DVD spins for a longer time (like 1-2 minutes), Windows will read it without any problem. Extra information: The DVD I'm talking about doesn't have any scratches. I tried cleaning the DVD drive with a lens cleaning CD. The DVD was burned in multiple sessions. My system has Windows 8 installed. Questions I wanted to ask: What makes the DVD spin for a longer period of time sometimes? If possible, how can I make Windows spin the DVD for a longer time? Any solution to my problem?

    Read the article

  • How do I get rid of the Cisco CP one-time user message on a Cisco 2901 router?

    - by user65712
    I get this message on a new 2901 Cisco router: Cisco Configuration Professional (Cisco CP) is installed on this device and it provides the default username "cisco" for one-time use. If you have already used the username "cisco" to login to the router and your IOS image supports the "one-time" user option, then this username has already expired. You will not be able to login to the router with this username after you exit this session. It is strongly suggested that you create a new username with a privilege level of 15 using the following command. username <myuser> privilege 15 secret 0 <mypassword> Replace <myuser> and <mypassword> with the username and password you want to use. It appears every time I log in and every time I do a show run. I have already set up my username. How do I turn it off?

    Read the article

  • What can be the reasons for localhost responding super-slow the first time a page is requested?

    - by frequent
    Still learning my server-ways with Apache 2.2/MySQL 5.2/ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that everything seems to run at normal loading time. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on first page load, but loading the same page from a real server works normally, so I'm also ruling that out as a probable cause. Since it happens regardless of which page I'm loading first it should not be page related, so I'm looking for other clues on why this could happen. Thanks for any thoughts!

    Read the article

  • How do you use Time Capsule for MAC controlled Internet?

    - by Kevin Perttula
    I'm using an Internet provider that requires a username and password to log in; the prompt shows up as soon as you open a web browser. Because of this, I can only have one device online at a time. How can I get my Time Capsule to essentially log in as the user and allow other devices to connect through it? My Time Capsule has worked with other ISPs that don't require the user to log in, but I recently moved. I may be wrong about how the ISP is allowing access; I wasn't sure how to describe the problem. Thanks.

    Read the article

  • What is a realistic average time difference between servers in the same LAN?

    - by monster
    Until recently, we had at work a small cluster of about 20 small Windows servers (which have now all been virtualized). They were all configured to synchronize with the local time server. It was on a 1Gb sub-network in our own DC. I never got them to be less than about 100ms away from each other, which I consider to be an incredibly big difference. Is that a normal value? What is a realistic expectation of the time difference between machines running on a 1Gb network, all connected to the same time server, and updating frequently, say every 5 minutes? I would like to know this because setting timeouts and other parameters in a distributed application requires taking that difference into consideration.
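
    As a side note, the offset each machine sees against the shared time server can be measured rather than estimated. The sketch below uses Apache Commons Net's NTP client purely as an illustration (the library choice and the server host name are assumptions, not something from the question); it prints the computed clock offset and round-trip delay in milliseconds.

        import java.net.InetAddress;
        import org.apache.commons.net.ntp.NTPUDPClient;
        import org.apache.commons.net.ntp.TimeInfo;

        public class ClockOffsetProbe {
            public static void main(String[] args) throws Exception {
                NTPUDPClient client = new NTPUDPClient();
                client.setDefaultTimeout(5000);   // don't hang on an unreachable server
                try {
                    // Placeholder host: point this at the same local time server the cluster uses.
                    InetAddress server = InetAddress.getByName("time.local.example");
                    TimeInfo info = client.getTime(server);
                    info.computeDetails();        // derives offset/delay from the NTP timestamps
                    System.out.println("Offset vs. time server: " + info.getOffset() + " ms"
                            + " (round-trip delay " + info.getDelay() + " ms)");
                } finally {
                    client.close();
                }
            }
        }

    Sampling this on each box gives a concrete spread to design timeouts around, instead of guessing from the clocks themselves.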

    Read the article

  • Calculate location and number of intersections between multiple date/time ranges?

    - by Patricker
    I need to calculate the location of intersections between multiple date ranges, and the number of overlapping intersections. Then I need to show which date/time ranges overlap each of those intersecting sections. It is slightly more complicated than that so I'll do my best to explain by providing an example. I am working in VB.Net, but C# examples are acceptable as well as I work in both. We have several high risk tasks that involve the same system. Below I have four example jobs named HR1/2/3/4 with start and end date/times.
    HR1 {1/6/2010 10:00 - 1/6/2010 15:00}
    HR2 {1/6/2010 11:00 - 1/6/2010 18:00}
    HR3 {1/6/2010 12:00 - 1/6/2010 14:00}
    HR4 {1/6/2010 18:00 - 1/6/2010 20:00}
    What I want the end result to be is shown below. I am having trouble describing it any way but by example.
    HRE1 {1/6/2010 10:00 - 1/6/2010 11:00} - Intersects 1
    {End Time Split 1, for readability only, not needed in solution}
    HRE1 {1/6/2010 11:00 - 1/6/2010 12:00} - Intersects 2
    HRE2 {1/6/2010 11:00 - 1/6/2010 12:00} - Intersects 2
    {End Time Split 2, for readability only, not needed in solution}
    HRE1 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
    HRE2 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
    HRE3 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
    {End Time Split 3, for readability only, not needed in solution}
    HRE1 {1/6/2010 14:00 - 1/6/2010 15:00} - Intersects 2
    HRE2 {1/6/2010 14:00 - 1/6/2010 15:00} - Intersects 2
    {End Time Split 4, for readability only, not needed in solution}
    HRE2 {1/6/2010 15:00 - 1/6/2010 18:00} - Intersects 1
    {End Time Split 5, for readability only, not needed in solution}
    HR4 {1/6/2010 18:00 - 1/6/2010 20:00} - Intersects 1
    Any help would be greatly appreciated.
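
    A common way to produce exactly this kind of output is a boundary-point sweep: treat every start and end as a cut point, then for each pair of consecutive cuts list the ranges that cover that slice. The poster wants VB.NET or C#, so the Java sketch below (which also assumes the 1/6/2010 dates mean January 6) is only meant to show the shape of the algorithm; it translates to .NET almost line for line.

        import java.time.LocalDateTime;
        import java.util.List;
        import java.util.TreeSet;

        public class RangeIntersections {              // requires Java 16+ (records, Stream.toList)
            record Range(String name, LocalDateTime start, LocalDateTime end) {}

            public static void main(String[] args) {
                List<Range> ranges = List.of(
                        new Range("HR1", dt(10), dt(15)),
                        new Range("HR2", dt(11), dt(18)),
                        new Range("HR3", dt(12), dt(14)),
                        new Range("HR4", dt(18), dt(20)));

                // Every start and end is a potential split point.
                TreeSet<LocalDateTime> cuts = new TreeSet<>();
                for (Range r : ranges) { cuts.add(r.start()); cuts.add(r.end()); }

                // Walk consecutive cut points and report which ranges cover each slice.
                LocalDateTime prev = null;
                for (LocalDateTime cut : cuts) {
                    if (prev != null) {
                        LocalDateTime from = prev, to = cut;
                        List<String> covering = ranges.stream()
                                .filter(r -> !r.start().isAfter(from) && !r.end().isBefore(to))
                                .map(Range::name)
                                .toList();
                        if (!covering.isEmpty()) {
                            System.out.printf("%s - %s intersects %d %s%n",
                                    from, to, covering.size(), covering);
                        }
                    }
                    prev = cut;
                }
            }

            private static LocalDateTime dt(int hour) {   // all sample jobs fall on 1/6/2010
                return LocalDateTime.of(2010, 1, 6, hour, 0);
            }
        }

    The "Intersects N" count falls out of covering.size(), and slices nobody covers are simply skipped, which also reproduces the final HR4 line.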

    Read the article

  • Unable to get the correct time when calling serverTime using jquery.countdown.js + ASP.NET?

    - by user312891
    When I call the function below I am unable to get the correct answer. Both shortly and newTime hold the same time; one comes from the client side, the other syncs with the server. http://keith-wood.name/countdown.html I am waiting for your response. Thanks
    $(function() {
        var shortly = new Date('April 9, 2010 20:38:10');
        var newTime = new Date('April 9, 2010 20:38:10');
        //for loop divid ///
        $('#defaultCountdown').countdown({
            until: shortly,
            onExpiry: liftOff,
            onTick: watchCountdown,
            serverSync: serverTime
        });
        $('#div1').countdown({ until: newTime });
    });
    function serverTime() {
        var time = null;
        $.ajax({
            type: "POST",
            //Page Name (in which the method should be called) and method name
            url: "Default.aspx/GetTime",
            // If you want to pass parameter or data to server side function you can try line
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            data: "{}",
            async: false,
            //else If you don't want to pass any value to server side function leave the data to blank line below
            //data: "{}",
            success: function(msg) {
                //Got the response from server and render to the client
                time = new Date(msg.d);
                alert(time);
            },
            error: function(msg) {
                time = new Date();
                alert('1');
            }
        });
        return time;
    }

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who helped

    - by Liam Westley
    MVP in Virtual Machines Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year.   It was an honour to be nominated, and is a great reflection on the vibrancy of the UK user group community which made this possible. Virtualisation for developers, not just IT Pros I consider it a special honour as my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than an IT Pro who manages data centre and network infrastructure.  I’ve been on a minor mission over the past few years to enthuse developers in a topic usually seen as only for network admins, but which can make their life a whole lot easier once understood properly. Continuous learning is fun In 1676, the scientist Isaac Newton, in a letter to Robert Hooke used the phrase (http://www.phrases.org.uk/meanings/268025.html) ‘If I have seen a little further it is by standing on the shoulders of Giants’ I’m a nuclear physicist by education, so I am more than comfortable that any knowledge I have is based on the work of others.  Although far from a science, software development and IT is equally built upon the work of others. It’s one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about.  I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have executed my responsibility to share any knowledge gained as widely as possible. Thanks to all those who helped – a big thanks to the UK user group community I reckon this journey started in 2003 when I started attending a user group called the London .Net Users Group (http://www.dnug.org.uk) started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non professional speakers to take the stage at the user group, and my first ever presentation was on 30th September 2003; SQL Server CE 2.0 and the.NET Compact Framework. In 2005 Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft’s UK HQ in Thames Valley park in Reading.  He encouraged me to take part and so on 14th May 2005 I presented a talk previously given to the London .Net User Group on Simplifying access to multiple DB providers in .NET.  From that point on I definitely had the bug; presenting at DDD2, DDD3, groking at DDD4 and SQLBits I and after a break, DDD7, DDD Scotland and DDD8.  What definitely made me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers; Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay. During the first few DDD events I met the Dave McMahon and Richard Costall from NxtGenUG who made it easy to start presenting at their user groups.  Along the way I’ve met a load of great user group organisers; Guy Smith-Ferrier of the .Net Developer Network, Jimmy Skowronski of GL.Net and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG. Final thanks to those who suggested virtualisation as a topic ... Final thanks have to go the people who inspired me to create my Virtualisation for Developers talk.  
Toby Henderson (@holytshirt) ensured I took notice of Sun’s VirtualBox, Peter Ibbotson for being a fine sounding board at the Kew Railway over quite a few Adnam’s Broadside and to Guy Smith-Ferrier for allowing his user group to be the guinea pigs for the talk before it was seen at DDD7.  Thanks to all of you I now know much more about virtualisation than I would have thought possible and it continues to be great fun. Conclusion If this was an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far.  I’ll be doing my best to do justice to the MVP award and the UK community.  I’m fortunate in having a new employer who considers presenting at user groups as a good thing, so don’t expect me to stop any time soon. If you’ve never seen me in action, then you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here, http://www.craigmurphy.com/blog/?p=1591.  Also thanks to Craig Murphy’s fine video work you can also view my latest DDD8 presentation on Commercial Software Development, here, http://vimeo.com/9216563 P.S. If I’ve missed anyone out, do feel free to lambast me in comments, it’s your duty.

    Read the article

  • Time to Get Started with Oracle Data Integrator 12c!

    - by Sandrine Riley
    It is time to get started with Oracle Data Integrator 12c! We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI, which provides a few options.
    Step 1 – Start by downloading the Getting Started (PDF). The document provides general background information and detailed examples to help you learn how to use Oracle Data Integrator. This document guides you through a first project with Oracle Data Integrator 12c, and the instructions in this document are required for using the Getting Started Demonstration Environment. If you would like to run the Getting Started Demonstration Environment on your system, go to Step 2 A. If you would like to run the Getting Started Demonstration Environment installed and pre-configured in a Virtual Image, go to Step 2 B.
    Step 2 A – If you would like to run the Getting Started Demo Environment on your system, you will want to then download the Demo Environment for Getting Started (ZIP) archive containing ODI metadata, configuration scripts and sample data.
    Step 2 B – Alternatively, a pre-configured virtual machine is available with the Getting Started installation and configuration. The Virtual Machine platform uses Oracle Virtual Box technology, and use of this configuration would necessitate the download/installation of Virtual Box and the Getting Started virtual machine. If you prefer this route and would rather run the Getting Started Demonstration Environment installed and preconfigured in a Virtual Machine, please download the ODI 12c Getting Started Virtual Machine.
    Step 3 – In continuation with the Getting Started Demonstration Environment and the Getting Started Guide, you will now be able to run through a first project with Oracle Data Integrator.
    Now that you have successfully done the exercises above, you may want to play on your own, and develop with Oracle Data Integrator on your environment... here you can download a full version of Oracle Data Integrator 12c. Enjoy, and happy developing! Learn more about Oracle Data Integrator; including FAQ, the Oracle by Example Series, Sample Code, and White Papers.

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) that extracts climate data from weather stations within a given radius of a city using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively: CREATE UNIQUE INDEX measurement_001_stc_idx ON climate.measurement_001 USING btree (station_id, taken, category_id); Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 had a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan: sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1997-12-31'::date AND How do I persuade PostgreSQL to use the indexes regardless of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you! Query SELECT extract(YEAR FROM m.taken) AS year, avg(m.amount) AS amount FROM climate.city c, climate.station s, climate.station_category sc, climate.measurement m WHERE c.id = 5182 AND earth_distance( ll_to_earth(c.latitude_decimal,c.longitude_decimal), ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND s.elevation BETWEEN 0 AND 3000 AND s.applicable = TRUE AND sc.station_id = s.id AND sc.category_id = 1 AND sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1996-12-31'::date AND m.station_id = s.id AND m.taken BETWEEN sc.taken_start AND sc.taken_end AND m.category_id = sc.category_id GROUP BY extract(YEAR FROM m.taken) ORDER BY extract(YEAR FROM m.taken) 1900 to 1996: Index "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)" " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))" " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)" " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))" " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)" " Index Cond: (s.id = sc.station_id)" " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))" " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)" 
" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)" " Filter: (m.category_id = 1)" " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)" " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)" " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" "Total runtime: 2269.264 ms" 1900 to 1997: Full Table Scan "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)" " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)" " Hash Cond: (m.station_id = sc.station_id)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))" " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)" " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)" " Filter: (category_id = 1)" " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)" " Filter: (category_id = 1)" " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)" " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)" " Hash Cond: (s.id = sc.station_id)" " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)" " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))" " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)" " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)" " Recheck Cond: (category_id = 1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))" " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)" " Index Cond: (category_id = 1)" "Total runtime: 86165.936 ms"

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review. A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (asp.net 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them...let alone calling them from JS client side). Keeping a long story short -- I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, the morning commute, and laying awake in bed at night. I learned a great deal about web services and xml/json/javascript...but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and pm's) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding...if its something I haven't seen before, I can spend up to two weeks on a problem that seems to only take the pros in the videos moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit. To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback. Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work? If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed? Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables? When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyen power level? Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • Android - can't read TXT files from SD card on a real machine?

    - by JustMe
    Hello! When I run the code below in the Android emulator (1.5) it works well; the TextSwitcher shows the first 80 chars from each txt file in /sdcard/documents/. But when I run it on my Samsung Galaxy i7500 (1.6) there are no contents in the TextSwitcher, although LogCat does show the file names of the txt files. My code:
    public void getTxtFiles(){
        //Scan /sdcard/documents and put .txt files in array File TxtFiles[]
        String path = Environment.getExternalStorageDirectory().toString()+"/documents/";
        String files;
        File folder = new File(path);
        if(folder.exists()==false){
            if (!folder.mkdirs()) {
                Log.e("TAG", "Create dir in sdcard failed");
                return;
            }
        }
        else{
            File listOfFiles[] = folder.listFiles();
            for (int i = 0; i < listOfFiles.length; i++) {
                if (listOfFiles[i].isFile()) {
                    files = listOfFiles[i].getName();
                    if (files.endsWith(".txt") || files.endsWith(".TXT")) {
                        if((files.length()-1)>i){resizeArray(TxtFiles, files.length()+10);}
                        TxtFiles[i]=listOfFiles[i];
                        System.out.println(TxtFiles[i]);
                    }
                }
            }
        }
    }
    private void updateCounter(int Pozicija) {
        if(Pozicija<0){
            Toast.makeText(getApplicationContext(), R.string.LastTxt, 5).show();
            mCounter++;
        }
        else if(TxtFiles[mCounter]!=null){
            TextToShow = getContents(TxtFiles[mCounter]);
            if(TextToShow.length()>80)TextToShow=TextToShow.substring(0, 80);
            mSwitcher.setText(TextToShow);
            System.out.println(Pozicija);
        }
        else mCounter--;
    }
    static public String getContents(File aFile) {
        //...checks on aFile are elided
        StringBuilder contents = new StringBuilder();
        try {
            //use buffering, reading one line at a time
            //FileReader always assumes default encoding is OK!
            BufferedReader input = new BufferedReader(new FileReader(aFile));
            try {
                String line = null; //not declared within while loop
                /*
                 * readLine is a bit quirky :
                 * it returns the content of a line MINUS the newline.
                 * it returns null only for the END of the stream.
                 * it returns an empty String if two newlines appear in a row.
                 */
                while (( line = input.readLine()) != null){
                    contents.append(line);
                    contents.append(System.getProperty("line.separator"));
                }
            } finally {
                input.close();
            }
        } catch (IOException ex){
            ex.printStackTrace();
        }
        return contents.toString();
    }
    And I am able to write the contents of those files through LogCat! Any ideas?
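
    One detail worth isolating from the emulator-vs-device question: the fixed TxtFiles array combined with the resizeArray call indexes by loop position and can drop or misplace entries. Below is a minimal, hypothetical rewrite of just the scanning step using a growable list; the class and method names are mine, not part of the original code.

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        import android.os.Environment;
        import android.util.Log;

        public class TxtScanner {
            /** Collect every .txt/.TXT file under /sdcard/documents, creating the folder if needed. */
            public static List<File> listTxtFiles() {
                List<File> txtFiles = new ArrayList<File>();
                File folder = new File(Environment.getExternalStorageDirectory(), "documents");
                if (!folder.exists() && !folder.mkdirs()) {
                    Log.e("TxtScanner", "Create dir on sdcard failed");
                    return txtFiles;                   // empty list, caller can check isEmpty()
                }
                File[] entries = folder.listFiles();
                if (entries == null) {                 // listFiles() is null if the path isn't readable
                    Log.e("TxtScanner", "Cannot read " + folder);
                    return txtFiles;
                }
                for (File f : entries) {
                    if (f.isFile() && f.getName().toLowerCase().endsWith(".txt")) {
                        txtFiles.add(f);
                    }
                }
                return txtFiles;
            }
        }

    Logging folder.getAbsolutePath() on the device is also a cheap sanity check, since the external storage mount point is not guaranteed to be literally /sdcard on every phone.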

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    Getting Started with Business Transformations A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1) Fig. 1: Business Process Management Lifecycle Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forethought of both are questions, such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see figure 2) Fig. 2: The Level Concept, Generic Process Hierarchy Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy. The Level Concept in Brief Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications.
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level), that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization. Key Considerations Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are Three Key Factors to consider when building an Enterprise Process Map: Company Strategic Focus Process Categorization: Customer is Core End-to-end versus Functional Processes Company Strategic Focus As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives, and in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see figure 3). Fig. 3: Company X Enterprise Process Map In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories. 
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see figure 4). Fig. 4: Company Y Enterprise Process Map Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as a key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high level of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives set forth in that right. Process Categorization: Customer is Core In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Customer Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes their processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company. 
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See figure 5) Fig. 5: Company Z Enterprise Process Map Company Z has what may be viewed as several different perspectives or "cuts" baked into their Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom) of Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided into the focus areas of their two primary business lines, Foods and Beverages. Does this mean that areas, such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names, such as "Core," "Management" or "Support," can be a touchy subject. What is important to understand, is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management Processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc. End-to-end versus Functional Processes Every high and low level of process: function, task, activity, process/work step (whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having their own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 3 shows a simple example: Just as the package can only be delivered (outcome of the process) without first being retrieved, loaded, and the travel destination reached (outcomes of the process steps), some higher level of process "Play Practical Joke" (e.g., main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks. Fig. 6: End to End Process Construction That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy. 
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, but not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. Then, an Enterprise Process Map should be concerned with both views, the building blocks, and access points to the business-critical end-to-end processes, which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost mess of process disorganization. End-to-end views are essential for utilization in optimization in context, understanding customer impacts, base-lining all project phases and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to end-to-end processes, which are most valuable to them. (See figures 7 and 8). Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus. Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes The second example shows a more complex arrangement and categorization of functional processes (the names of each process area has been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map is dependent, in large part, on the orientation of its audience, and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view. Conclusion In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of the Enterprise Process Maps is not a prescriptive "how to" guide. However, a company attempting to create one may not have the practical BPM experience to truly explore its options or impacts to the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be a cognoscente of the message he is delivering and the factors at hand. 
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information please check: Oracle Business Process Management.

    Read the article

  • htaccess Redirect 301 problem

    - by Marty
    I have the following in my .htaccess file-
    Redirect 301 / http://www.foo.com/south-carolina-real-estate/
    Redirect 301 /related/aiken-sc.htm?tkn=MXNDbGxQjEAKEwj0qrmMz_OYAhUdBGoKHY43MKwYASAFMKCTDDgNUKCTDFDLuosP http://www.foo.com/south-carolina-real-estate/
    Redirect 301 /related/aiken-sc.htm http://www.foo.com/south-carolina-real-estate/
    Redirect 301 /related/spartanburg.htm?tkn=0bzl_HmfIxIKEwj0qrmMz_OYAhUdBGoKHY43MKwYASADMKCTDDgNUKCTDFDLuosP http://www.foo.com/south-carolina-real-estate/
    Redirect 301 /related/spartanburg.htm http://www.foo.com/south-carolina-real-estate/
    But when I visit http://www.url.com/related/aiken-sc.htm I get the following URL in the browser-
    http://www.foo.com/south-carolina-real-estate/related/aiken-sc.htm
    Not sure what the problem is, this works fine on other sites...?

    Read the article

  • Can a Mac machine be used by multiple users at the same time?

    - by Amit Jain
    Hi All, Can a Mac machine be used by different users at the same time? I mean to say that we have a single Mac machine, but can 3 users access the same machine remotely at the same time for developing applications for iPhone or Mac? Does Mac OS X Server allow us to do this? If yes, please provide me with a suitable link. Thanks Amit

    Read the article

  • Pinging switch results in alternating high/low response times

    - by Paul Pringle
    We have two switches that are behaving strangely. When I ping them the responses alternate between high and low results:
    C:\Users\paul>ping sw-linksys1 -t
    Pinging sw-linksys1.sep.com [172.16.254.235] with 32 bytes of data:
    Reply from 172.16.254.235: bytes=32 time=39ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=154ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=142ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=143ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=146ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=152ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=153ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=153ms TTL=64
    Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
    Other switches in the network behave normally. I've rebooted the switches, but the behavior still is there with the ping. Any ideas on how to troubleshoot this? Thanks!

    Read the article

  • How can ASP.NET's "Request Wait Time" be 0 when "Requests Queued" is consistently in the hundreds?

    - by ondrej
    I'm curious why Performance Monitor claims I always have a few hundred ASP.NET 3.5 requests "queued". The "Requests Queued" "ASP.NET v2.0.50727" performance counter is hovering in the few-hundred range despite the fact "Request Wait Time" is consistently 0. If each and every request never waits even a fraction of a millisecond, how could it be in the queue? The "ASP.NET Apps v2.0.50727" counters for "Requests In Application Queue" and "Request Wait Time" are always 0.

    Read the article

  • How to make IE 9 stop reading all the fonts every time?

    - by Mehrdad
    Process Monitor showed me that IE 9 accesses all the fonts on my system every time it loads, which makes it have a 1- to 2-second delay every time it loads. (I tested this by removing my fonts -- it loaded much more quickly.) It gets a little annoying, because it's the best handler I have for MHT files, so I don't want to switch to something else. Is there any way to make it avoid doing that? (The "Hide Fonts" feature in Windows 7 doesn't work.)

    Read the article
